Jan 24 06:53:17 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 24 06:53:17 crc restorecon[4588]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 06:53:17 crc restorecon[4588]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 24 06:53:17 crc restorecon[4588]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc 
restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:17 crc restorecon[4588]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 
06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 
crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 
06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 
crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc 
restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 06:53:18 crc restorecon[4588]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 24 06:53:18 crc kubenswrapper[4675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 06:53:18 crc kubenswrapper[4675]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 24 06:53:18 crc kubenswrapper[4675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 06:53:18 crc kubenswrapper[4675]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 06:53:18 crc kubenswrapper[4675]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 24 06:53:18 crc kubenswrapper[4675]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.798849 4675 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801826 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801843 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801848 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801852 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801856 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801861 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801866 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801871 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801875 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 06:53:18 crc 
kubenswrapper[4675]: W0124 06:53:18.801880 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801884 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801888 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801892 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801903 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801907 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801911 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801916 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801921 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801925 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801929 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801933 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801937 4675 feature_gate.go:330] unrecognized feature gate: Example Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801941 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801945 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801949 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801952 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801956 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801960 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801964 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801968 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801971 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801975 4675 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801978 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801982 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801986 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801990 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801994 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.801998 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802001 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802005 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802008 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802012 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802016 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802021 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802025 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802029 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802033 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802037 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802041 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802050 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802054 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802062 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802066 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802069 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802074 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802082 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802086 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802090 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802094 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802097 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802101 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802104 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802108 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802111 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802115 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802118 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802121 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802124 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802128 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 
06:53:18.802133 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.802138 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802352 4675 flags.go:64] FLAG: --address="0.0.0.0" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802362 4675 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802373 4675 flags.go:64] FLAG: --anonymous-auth="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802378 4675 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802384 4675 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802388 4675 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802393 4675 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802398 4675 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802403 4675 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802407 4675 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802411 4675 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802415 4675 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802419 4675 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802423 4675 flags.go:64] FLAG: --cgroup-root="" Jan 24 06:53:18 crc kubenswrapper[4675]: 
I0124 06:53:18.802433 4675 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802437 4675 flags.go:64] FLAG: --client-ca-file="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802441 4675 flags.go:64] FLAG: --cloud-config="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802444 4675 flags.go:64] FLAG: --cloud-provider="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802448 4675 flags.go:64] FLAG: --cluster-dns="[]" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802456 4675 flags.go:64] FLAG: --cluster-domain="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802459 4675 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802464 4675 flags.go:64] FLAG: --config-dir="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802468 4675 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802472 4675 flags.go:64] FLAG: --container-log-max-files="5" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802477 4675 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802481 4675 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802485 4675 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802489 4675 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802494 4675 flags.go:64] FLAG: --contention-profiling="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802498 4675 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802502 4675 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 
06:53:18.802506 4675 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802510 4675 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802518 4675 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802522 4675 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802526 4675 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802530 4675 flags.go:64] FLAG: --enable-load-reader="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802534 4675 flags.go:64] FLAG: --enable-server="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802538 4675 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802546 4675 flags.go:64] FLAG: --event-burst="100" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802550 4675 flags.go:64] FLAG: --event-qps="50" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802554 4675 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802558 4675 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802562 4675 flags.go:64] FLAG: --eviction-hard="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802567 4675 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802571 4675 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802575 4675 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802579 4675 flags.go:64] FLAG: --eviction-soft="" Jan 24 06:53:18 crc kubenswrapper[4675]: 
I0124 06:53:18.802583 4675 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802587 4675 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802596 4675 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802600 4675 flags.go:64] FLAG: --experimental-mounter-path="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802604 4675 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802609 4675 flags.go:64] FLAG: --fail-swap-on="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802613 4675 flags.go:64] FLAG: --feature-gates="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802618 4675 flags.go:64] FLAG: --file-check-frequency="20s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802622 4675 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802627 4675 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802631 4675 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802635 4675 flags.go:64] FLAG: --healthz-port="10248" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802639 4675 flags.go:64] FLAG: --help="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802643 4675 flags.go:64] FLAG: --hostname-override="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802647 4675 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802651 4675 flags.go:64] FLAG: --http-check-frequency="20s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802655 4675 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 
06:53:18.802659 4675 flags.go:64] FLAG: --image-credential-provider-config="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802663 4675 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802667 4675 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802671 4675 flags.go:64] FLAG: --image-service-endpoint="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802675 4675 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802679 4675 flags.go:64] FLAG: --kube-api-burst="100" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802683 4675 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802687 4675 flags.go:64] FLAG: --kube-api-qps="50" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802690 4675 flags.go:64] FLAG: --kube-reserved="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802694 4675 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802698 4675 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802702 4675 flags.go:64] FLAG: --kubelet-cgroups="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802706 4675 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802711 4675 flags.go:64] FLAG: --lock-file="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802715 4675 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802738 4675 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802743 4675 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 24 06:53:18 crc kubenswrapper[4675]: 
I0124 06:53:18.802751 4675 flags.go:64] FLAG: --log-json-split-stream="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802757 4675 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802762 4675 flags.go:64] FLAG: --log-text-split-stream="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802767 4675 flags.go:64] FLAG: --logging-format="text" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802780 4675 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802785 4675 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802789 4675 flags.go:64] FLAG: --manifest-url="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802793 4675 flags.go:64] FLAG: --manifest-url-header="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802798 4675 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802803 4675 flags.go:64] FLAG: --max-open-files="1000000" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802808 4675 flags.go:64] FLAG: --max-pods="110" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802812 4675 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802816 4675 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802820 4675 flags.go:64] FLAG: --memory-manager-policy="None" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802824 4675 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802828 4675 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802832 4675 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 24 06:53:18 
crc kubenswrapper[4675]: I0124 06:53:18.802835 4675 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802846 4675 flags.go:64] FLAG: --node-status-max-images="50" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802850 4675 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802854 4675 flags.go:64] FLAG: --oom-score-adj="-999" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802858 4675 flags.go:64] FLAG: --pod-cidr="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802862 4675 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802868 4675 flags.go:64] FLAG: --pod-manifest-path="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802873 4675 flags.go:64] FLAG: --pod-max-pids="-1" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802877 4675 flags.go:64] FLAG: --pods-per-core="0" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802881 4675 flags.go:64] FLAG: --port="10250" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802885 4675 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802889 4675 flags.go:64] FLAG: --provider-id="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802893 4675 flags.go:64] FLAG: --qos-reserved="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802905 4675 flags.go:64] FLAG: --read-only-port="10255" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802909 4675 flags.go:64] FLAG: --register-node="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802913 4675 flags.go:64] FLAG: --register-schedulable="true" Jan 24 06:53:18 crc 
kubenswrapper[4675]: I0124 06:53:18.802917 4675 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802928 4675 flags.go:64] FLAG: --registry-burst="10" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802932 4675 flags.go:64] FLAG: --registry-qps="5" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802936 4675 flags.go:64] FLAG: --reserved-cpus="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802941 4675 flags.go:64] FLAG: --reserved-memory="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802945 4675 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802949 4675 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802959 4675 flags.go:64] FLAG: --rotate-certificates="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802963 4675 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802967 4675 flags.go:64] FLAG: --runonce="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802971 4675 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802975 4675 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802979 4675 flags.go:64] FLAG: --seccomp-default="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802983 4675 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802987 4675 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802991 4675 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802995 4675 flags.go:64] FLAG: 
--storage-driver-host="localhost:8086" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.802999 4675 flags.go:64] FLAG: --storage-driver-password="root" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803003 4675 flags.go:64] FLAG: --storage-driver-secure="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803006 4675 flags.go:64] FLAG: --storage-driver-table="stats" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803010 4675 flags.go:64] FLAG: --storage-driver-user="root" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803014 4675 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803018 4675 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803022 4675 flags.go:64] FLAG: --system-cgroups="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803026 4675 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803033 4675 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803037 4675 flags.go:64] FLAG: --tls-cert-file="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803041 4675 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803048 4675 flags.go:64] FLAG: --tls-min-version="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803052 4675 flags.go:64] FLAG: --tls-private-key-file="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803056 4675 flags.go:64] FLAG: --topology-manager-policy="none" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803060 4675 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803064 4675 flags.go:64] FLAG: --topology-manager-scope="container" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803068 
4675 flags.go:64] FLAG: --v="2" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803074 4675 flags.go:64] FLAG: --version="false" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803079 4675 flags.go:64] FLAG: --vmodule="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803084 4675 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803089 4675 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803212 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803218 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803222 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803225 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803229 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803239 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803243 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803248 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803252 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803257 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803260 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803265 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803268 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803276 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803279 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803283 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803286 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803290 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803293 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803297 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803300 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803304 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 
06:53:18.803307 4675 feature_gate.go:330] unrecognized feature gate: Example Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803310 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803314 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803317 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803321 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803324 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803327 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803332 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803336 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803340 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803344 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803347 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803350 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803354 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803357 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803361 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803364 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803368 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803371 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803380 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803384 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803388 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 06:53:18 crc 
kubenswrapper[4675]: W0124 06:53:18.803392 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803397 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803401 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803405 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803409 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803413 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803417 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803421 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803425 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803428 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803432 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803435 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803439 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803442 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 06:53:18 crc kubenswrapper[4675]: 
W0124 06:53:18.803446 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803450 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803455 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803459 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803463 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803467 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803471 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803475 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803479 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803482 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803486 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803490 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.803493 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.803624 4675 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false 
KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.811440 4675 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.811785 4675 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811873 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811880 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811884 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811888 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811892 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811896 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811900 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811904 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811907 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811910 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 06:53:18 crc 
kubenswrapper[4675]: W0124 06:53:18.811914 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811918 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811921 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811925 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811929 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811932 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811936 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811939 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811943 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811946 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811949 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811953 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811957 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811960 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811963 4675 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811967 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811970 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811974 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811977 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811982 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811990 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.811996 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812000 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812004 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812008 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812012 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812017 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812021 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812025 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812029 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812033 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812036 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812040 4675 feature_gate.go:330] unrecognized feature gate: Example Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812043 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812047 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812050 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812054 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812058 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812061 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812065 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812068 4675 feature_gate.go:330] unrecognized 
feature gate: MixedCPUsAllocation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812072 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812076 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812079 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812083 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812087 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812091 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812094 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812098 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812101 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812105 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812108 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812112 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812118 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812123 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812128 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812132 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812136 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812140 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812143 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812148 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.812154 4675 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812273 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812279 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812283 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 06:53:18 crc 
kubenswrapper[4675]: W0124 06:53:18.812287 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812291 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812294 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812299 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812302 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812306 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812309 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812313 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812317 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812320 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812323 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812327 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812333 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812338 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812342 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812347 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812351 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812355 4675 feature_gate.go:330] unrecognized feature gate: Example Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812359 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812363 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812367 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812371 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812375 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812378 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812382 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812385 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812389 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812392 4675 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812396 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812400 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812403 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812407 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812411 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812416 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812426 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812430 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812435 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812439 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812443 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812447 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812451 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812454 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812458 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812462 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812466 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812470 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812473 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812477 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812480 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812483 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812487 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812491 4675 feature_gate.go:330] 
unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812495 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812498 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812502 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812506 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812509 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812512 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812516 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812521 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812525 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812529 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812533 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812537 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812540 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812544 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812548 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.812551 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.812557 4675 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.812924 4675 server.go:940] "Client rotation is on, will bootstrap in background" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.815124 4675 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 24 06:53:18 crc kubenswrapper[4675]: 
I0124 06:53:18.815200 4675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.815637 4675 server.go:997] "Starting client certificate rotation" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.815660 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.815971 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 02:00:43.118190722 +0000 UTC Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.816040 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.820036 4675 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.821337 4675 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.821486 4675 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.828063 4675 log.go:25] "Validated CRI v1 runtime API" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.842675 4675 log.go:25] "Validated CRI v1 image API" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.844122 4675 server.go:1437] "Using cgroup driver 
setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.846629 4675 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-24-06-48-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.846686 4675 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.866359 4675 manager.go:217] Machine: {Timestamp:2026-01-24 06:53:18.865212207 +0000 UTC m=+0.161317470 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:162c3bb2-7c82-48b0-b2c6-851c52c6f34e BootID:79a1b90b-9d8a-4b28-bda7-61ba2f3990af Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 
DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c4:68:e6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c4:68:e6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d0:be:81 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cb:66:04 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d6:34:8f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cc:38:8c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:51:64:4e:a0:42 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0e:3c:7e:f4:73:3d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified 
Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.866632 4675 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.866949 4675 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.867514 4675 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.867858 4675 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.867911 4675 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.868183 4675 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.868197 4675 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.868394 4675 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.868431 4675 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.868709 4675 state_mem.go:36] "Initialized new in-memory state store" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.868833 4675 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.869457 4675 kubelet.go:418] "Attempting to sync node with API server" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.869479 4675 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.869505 4675 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.869520 4675 kubelet.go:324] "Adding apiserver pod source" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.869535 4675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 
06:53:18.871845 4675 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.872140 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.872284 4675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.872279 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.872119 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.872369 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.873508 4675 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 06:53:18 crc 
kubenswrapper[4675]: I0124 06:53:18.874339 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874380 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874399 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874423 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874455 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874474 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874492 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874528 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874544 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874556 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874572 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.874584 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.875110 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.875988 4675 server.go:1280] "Started kubelet" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 
06:53:18.876394 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.877323 4675 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.877489 4675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 06:53:18 crc systemd[1]: Started Kubernetes Kubelet. Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.878340 4675 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.882040 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.882087 4675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.882291 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:10:04.44641482 +0000 UTC Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.882675 4675 server.go:460] "Adding debug handlers to kubelet server" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.883697 4675 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.883749 4675 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.885007 4675 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.888051 4675 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.888112 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.888335 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="200ms" Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.888649 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.893027 4675 factory.go:55] Registering systemd factory Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.893067 4675 factory.go:221] Registration of the systemd container factory successfully Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.890151 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.68:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d9838ec3ec29e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:53:18.875599518 +0000 UTC 
m=+0.171704751,LastTimestamp:2026-01-24 06:53:18.875599518 +0000 UTC m=+0.171704751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.894472 4675 factory.go:153] Registering CRI-O factory Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.894495 4675 factory.go:221] Registration of the crio container factory successfully Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.894568 4675 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.894589 4675 factory.go:103] Registering Raw factory Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.894607 4675 manager.go:1196] Started watching for new ooms in manager Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.895280 4675 manager.go:319] Starting recovery of all containers Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906184 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906267 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906291 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906311 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906330 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906349 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906368 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906412 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906434 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" 
seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906452 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.906470 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909570 4675 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909614 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909674 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909706 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909784 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909802 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909818 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909836 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909853 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909869 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909886 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909902 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909920 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909938 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.909991 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910007 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910037 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910101 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910114 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910129 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910189 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910202 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910215 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910228 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910241 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910274 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910288 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910302 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910314 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910326 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910338 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910368 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910383 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910411 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" 
seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910424 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910439 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910483 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910501 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910517 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910529 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910542 
4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910580 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910636 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910650 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910663 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910676 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910688 4675 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910701 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910747 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910775 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910787 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910800 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910814 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910826 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910864 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910894 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910905 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910930 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910943 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910955 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910968 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.910980 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911015 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911039 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911053 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911076 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911088 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911101 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911114 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911126 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911139 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911151 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911163 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911220 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911234 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911246 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911257 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911269 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911280 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911292 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911310 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911341 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911358 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911370 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911381 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911393 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911405 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911432 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911443 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911477 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911489 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911501 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911514 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911525 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911623 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911659 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911703 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911746 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911765 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911818 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911835 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911849 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911861 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911874 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911886 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911919 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911932 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911942 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911955 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911966 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.911979 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912021 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912034 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912064 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912076 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912088 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912107 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912119 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912131 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912170 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912186 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912216 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912245 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912257 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912300 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912312 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912324 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912350 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912362 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912391 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912405 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912416 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912428 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912439 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912451 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912465 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912511 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912559 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912622 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912635 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912681 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912695 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912707 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912743 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912755 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912791 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912804 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912815 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912827 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912891 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912907 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912935 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.912980 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913008 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913031 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913044 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913055 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913069 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913082 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913108 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913120 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913156 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913205 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913217 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913229 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913240 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913251 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913263 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913276 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913304 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913333 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913345 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913356 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913370 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913390 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913407 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913420 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913453 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913465 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913504 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913561 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913583 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state"
pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913601 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913632 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913645 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913671 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913702 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913778 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913826 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913840 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913852 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913874 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913885 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913923 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913942 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913954 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913965 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913977 4675 reconstruct.go:97] "Volume reconstruction finished" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.913987 4675 reconciler.go:26] "Reconciler: start to sync state" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.927819 4675 manager.go:324] Recovery completed Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.938988 4675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.940949 4675 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.941075 4675 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.941179 4675 kubelet.go:2335] "Starting kubelet main sync loop" Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.941302 4675 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.941466 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:18 crc kubenswrapper[4675]: W0124 06:53:18.942617 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.942700 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.944833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.944863 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.944878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.946177 4675 cpu_manager.go:225] 
"Starting CPU manager" policy="none" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.946199 4675 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.946219 4675 state_mem.go:36] "Initialized new in-memory state store" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.955574 4675 policy_none.go:49] "None policy: Start" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.956207 4675 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.956228 4675 state_mem.go:35] "Initializing new in-memory state store" Jan 24 06:53:18 crc kubenswrapper[4675]: E0124 06:53:18.988474 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.997564 4675 manager.go:334] "Starting Device Plugin manager" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.997623 4675 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.997635 4675 server.go:79] "Starting device plugin registration server" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.998026 4675 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.998073 4675 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.998730 4675 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.998799 4675 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 24 06:53:18 crc kubenswrapper[4675]: I0124 06:53:18.998807 4675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" 
Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.005217 4675 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.041904 4675 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.042040 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.043215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.043245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.043255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.043368 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.043771 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.043836 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.044346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.044387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.044398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.044540 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.044671 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.044709 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045086 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045475 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045615 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc 
kubenswrapper[4675]: I0124 06:53:19.045765 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.045803 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046516 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046524 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046657 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046823 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.046879 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047815 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.047840 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.048533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.048565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.048579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.089300 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="400ms" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.098381 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.099527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.099567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.099579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.099607 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.100200 4675 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.68:6443: connect: connection refused" node="crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116298 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116352 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116375 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116393 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116409 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116541 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116564 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116581 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116618 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116673 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116700 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.116766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 
24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.218224 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.218564 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.218581 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.218894 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219000 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219023 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219182 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219213 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219244 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219225 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219307 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219433 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219456 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219477 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219500 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219529 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219551 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219519 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219602 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219679 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219796 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219859 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.219898 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.220245 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.300896 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.303037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.303174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.303257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.303301 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.304265 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.68:6443: 
connect: connection refused" node="crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.384011 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.388607 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: W0124 06:53:19.421520 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f8b2b023b31f903e26f1fbb108c69d46bf6bc22b2f8714aac90be0c0f4af9c54 WatchSource:0}: Error finding container f8b2b023b31f903e26f1fbb108c69d46bf6bc22b2f8714aac90be0c0f4af9c54: Status 404 returned error can't find the container with id f8b2b023b31f903e26f1fbb108c69d46bf6bc22b2f8714aac90be0c0f4af9c54 Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.423470 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: W0124 06:53:19.426854 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2ad51a7cbb0561e2bf4219119614d7879d250340533635bf13ea5550dd5e10fe WatchSource:0}: Error finding container 2ad51a7cbb0561e2bf4219119614d7879d250340533635bf13ea5550dd5e10fe: Status 404 returned error can't find the container with id 2ad51a7cbb0561e2bf4219119614d7879d250340533635bf13ea5550dd5e10fe Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.430443 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.434244 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 06:53:19 crc kubenswrapper[4675]: W0124 06:53:19.444188 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c7aeab8467a18f505d511d97213d276ef0c8e342a1f812cf326a11db270658dd WatchSource:0}: Error finding container c7aeab8467a18f505d511d97213d276ef0c8e342a1f812cf326a11db270658dd: Status 404 returned error can't find the container with id c7aeab8467a18f505d511d97213d276ef0c8e342a1f812cf326a11db270658dd Jan 24 06:53:19 crc kubenswrapper[4675]: W0124 06:53:19.445438 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b5e2e326eb941d01f2de6eb725618e2bbaac4fad779d52ab4083e4844369d1ae WatchSource:0}: Error finding container b5e2e326eb941d01f2de6eb725618e2bbaac4fad779d52ab4083e4844369d1ae: Status 404 returned error can't find the container with id b5e2e326eb941d01f2de6eb725618e2bbaac4fad779d52ab4083e4844369d1ae Jan 24 06:53:19 crc kubenswrapper[4675]: W0124 06:53:19.453648 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fd3fa45850b44de92102c864f36441ffa651907a4a45686a3d49c96547d084b2 WatchSource:0}: Error finding container fd3fa45850b44de92102c864f36441ffa651907a4a45686a3d49c96547d084b2: Status 404 returned error can't find the container with id fd3fa45850b44de92102c864f36441ffa651907a4a45686a3d49c96547d084b2 Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.490856 4675 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="800ms" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.704965 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.706444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.706478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.706489 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.706513 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.706953 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.68:6443: connect: connection refused" node="crc" Jan 24 06:53:19 crc kubenswrapper[4675]: W0124 06:53:19.861886 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:19 crc kubenswrapper[4675]: E0124 06:53:19.861950 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection 
refused" logger="UnhandledError" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.877604 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.883757 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:08:49.117511352 +0000 UTC Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.948558 4675 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398" exitCode=0 Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.948639 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.948758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd3fa45850b44de92102c864f36441ffa651907a4a45686a3d49c96547d084b2"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.948863 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.950950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.950974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.950983 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.952473 4675 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf" exitCode=0 Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.952536 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.952557 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b5e2e326eb941d01f2de6eb725618e2bbaac4fad779d52ab4083e4844369d1ae"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.952622 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.953350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.953373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.953383 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.954110 4675 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9" exitCode=0 Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 
06:53:19.954161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.954177 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c7aeab8467a18f505d511d97213d276ef0c8e342a1f812cf326a11db270658dd"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.954241 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.955067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.955104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.955311 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.957041 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.957067 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8b2b023b31f903e26f1fbb108c69d46bf6bc22b2f8714aac90be0c0f4af9c54"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 
06:53:19.959089 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" exitCode=0 Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.959116 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.959135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2ad51a7cbb0561e2bf4219119614d7879d250340533635bf13ea5550dd5e10fe"} Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.959217 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.959809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.959837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.959847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.961268 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.962012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.962034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:19 crc kubenswrapper[4675]: I0124 06:53:19.962044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:20 crc kubenswrapper[4675]: W0124 06:53:20.123998 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:20 crc kubenswrapper[4675]: E0124 06:53:20.124164 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:20 crc kubenswrapper[4675]: E0124 06:53:20.292474 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="1.6s" Jan 24 06:53:20 crc kubenswrapper[4675]: W0124 06:53:20.413923 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:20 crc kubenswrapper[4675]: E0124 06:53:20.414419 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:20 crc 
kubenswrapper[4675]: W0124 06:53:20.467646 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.68:6443: connect: connection refused Jan 24 06:53:20 crc kubenswrapper[4675]: E0124 06:53:20.467804 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.68:6443: connect: connection refused" logger="UnhandledError" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.508793 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.510293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.510332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.510343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.510369 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:20 crc kubenswrapper[4675]: E0124 06:53:20.510885 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.68:6443: connect: connection refused" node="crc" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.884546 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-08 00:47:59.923353287 +0000 UTC Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.963612 4675 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863" exitCode=0 Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.963695 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.963886 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.964830 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.964868 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.964879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.965676 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.965860 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.966534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.966552 
4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.966561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.969696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.969765 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.969782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.969874 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.971076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.971105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.971116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 
06:53:20.972072 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.972081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.972118 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.972125 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.972130 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.974771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.974804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.974815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.977279 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.977313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.977325 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76"} Jan 24 06:53:20 crc kubenswrapper[4675]: I0124 06:53:20.977337 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63"} Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.442381 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.884839 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:35:30.470006528 +0000 UTC Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.982003 4675 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1" exitCode=0 Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.982098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1"} Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.982312 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.984096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.984140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.984153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.991266 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.991289 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.991323 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.991440 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.991194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b"} Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992231 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 
06:53:21.992272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992631 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:21 crc kubenswrapper[4675]: I0124 06:53:21.992666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.111287 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.112471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.112503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.112512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.112534 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.841957 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.847408 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.885785 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:11:53.673494667 +0000 UTC Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.999001 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e"} Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.999064 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2"} Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.999088 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5"} Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.999128 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:22 crc kubenswrapper[4675]: I0124 06:53:22.999197 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:22.999888 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.000796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.000850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.000873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.001611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.001673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.001698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.747221 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:23 crc kubenswrapper[4675]: I0124 06:53:23.886272 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:18:36.660281581 +0000 UTC Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.008539 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.008600 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:24 crc kubenswrapper[4675]: 
I0124 06:53:24.008701 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.008762 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.008519 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c"} Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.008894 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084"} Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010523 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010542 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010586 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010754 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.010810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.283047 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:24 crc kubenswrapper[4675]: I0124 06:53:24.887359 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:01:39.283778812 +0000 UTC Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.010578 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.010578 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.011680 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.011732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.011746 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.012487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.012523 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.012536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:25 crc kubenswrapper[4675]: I0124 06:53:25.888096 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:38:33.650816164 +0000 UTC Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.241630 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.241955 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.243674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.243772 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.243797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.405671 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.405950 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.406066 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.407910 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.407966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.407980 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:26 crc kubenswrapper[4675]: I0124 06:53:26.888514 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:10:19.102722335 +0000 UTC Jan 24 06:53:27 crc kubenswrapper[4675]: I0124 06:53:27.592338 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 24 06:53:27 crc kubenswrapper[4675]: I0124 06:53:27.592621 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:27 crc kubenswrapper[4675]: I0124 06:53:27.594565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:27 crc kubenswrapper[4675]: I0124 06:53:27.594632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:27 crc kubenswrapper[4675]: I0124 06:53:27.594653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:27 crc kubenswrapper[4675]: I0124 06:53:27.888837 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:33:17.166076941 +0000 UTC Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.475994 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.476255 
4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.478125 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.478196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.478212 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.889497 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:25:35.199706287 +0000 UTC Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.924889 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.925151 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.926798 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.926838 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:28 crc kubenswrapper[4675]: I0124 06:53:28.926852 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:29 crc kubenswrapper[4675]: E0124 06:53:29.005397 4675 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 06:53:29 crc kubenswrapper[4675]: I0124 06:53:29.242044 4675 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 06:53:29 crc kubenswrapper[4675]: I0124 06:53:29.242141 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 06:53:29 crc kubenswrapper[4675]: I0124 06:53:29.890047 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:09:01.087174072 +0000 UTC Jan 24 06:53:30 crc kubenswrapper[4675]: I0124 06:53:30.877699 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 24 06:53:30 crc kubenswrapper[4675]: I0124 06:53:30.890819 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:37:44.064726154 +0000 UTC Jan 24 06:53:30 crc kubenswrapper[4675]: E0124 06:53:30.975677 4675 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 
06:53:31.446962 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.447076 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.448034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.448072 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.448081 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.474966 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.475088 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.476076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.476115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.476127 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.891975 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:37:44.229706913 +0000 UTC Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.892419 
4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.892471 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.901627 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 24 06:53:31 crc kubenswrapper[4675]: I0124 06:53:31.901666 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 24 06:53:32 crc kubenswrapper[4675]: I0124 06:53:32.892889 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:04:52.576762774 +0000 UTC Jan 24 06:53:33 crc kubenswrapper[4675]: I0124 06:53:33.893567 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:24:15.167601924 +0000 UTC Jan 24 06:53:34 crc 
kubenswrapper[4675]: I0124 06:53:34.894522 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:26:09.62729778 +0000 UTC Jan 24 06:53:35 crc kubenswrapper[4675]: I0124 06:53:35.245971 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 06:53:35 crc kubenswrapper[4675]: I0124 06:53:35.261152 4675 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 06:53:35 crc kubenswrapper[4675]: I0124 06:53:35.894882 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:19:04.129347537 +0000 UTC Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.414597 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.414929 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.416864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.416934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.416963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.422011 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:36 crc kubenswrapper[4675]: E0124 06:53:36.878532 4675 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.880491 4675 trace.go:236] Trace[532815127]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:23.090) (total time: 13789ms): Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[532815127]: ---"Objects listed" error: 13789ms (06:53:36.880) Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[532815127]: [13.789889189s] [13.789889189s] END Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.880516 4675 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881351 4675 trace.go:236] Trace[1114930983]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:23.093) (total time: 13787ms): Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1114930983]: ---"Objects listed" error: 13787ms (06:53:36.881) Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1114930983]: [13.787954496s] [13.787954496s] END Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881384 4675 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881561 4675 trace.go:236] Trace[402083612]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:23.121) (total time: 13759ms): Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[402083612]: ---"Objects listed" error: 13759ms (06:53:36.881) Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[402083612]: [13.759644769s] [13.759644769s] END Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881573 4675 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 06:53:36 crc 
kubenswrapper[4675]: I0124 06:53:36.882205 4675 trace.go:236] Trace[1089945325]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:22.504) (total time: 14378ms): Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1089945325]: ---"Objects listed" error: 14378ms (06:53:36.882) Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1089945325]: [14.378137134s] [14.378137134s] END Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.882219 4675 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 06:53:36 crc kubenswrapper[4675]: E0124 06:53:36.883476 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.883735 4675 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.895199 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:17:03.564524987 +0000 UTC Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.921870 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36434->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.921923 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36434->192.168.126.11:17697: read: connection 
reset by peer" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.049329 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.051577 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b" exitCode=255 Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.051622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b"} Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.151172 4675 scope.go:117] "RemoveContainer" containerID="e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.216834 4675 csr.go:261] certificate signing request csr-7544c is approved, waiting to be issued Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.301339 4675 csr.go:257] certificate signing request csr-7544c is issued Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.496710 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.500516 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.525124 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.880750 4675 apiserver.go:52] "Watching apiserver" Jan 24 
06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.889874 4675 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.890417 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-zbs9f","openshift-multus/multus-zx9ns","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.890846 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891490 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891541 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:37 crc kubenswrapper[4675]: E0124 06:53:37.891552 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891595 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891642 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:37 crc kubenswrapper[4675]: E0124 06:53:37.891662 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891714 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:37 crc kubenswrapper[4675]: E0124 06:53:37.891741 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891940 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zx9ns" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.895781 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:48:10.29631895 +0000 UTC Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896206 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896325 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896323 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896479 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896630 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897863 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897874 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897910 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897992 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900524 4675 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900625 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900789 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900821 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900839 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900935 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.901047 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.907757 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.922144 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.970043 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.986079 4675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.990857 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.990903 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.990931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991197 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991804 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991878 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991934 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991933 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992011 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992264 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992482 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992538 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992994 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993010 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993319 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993339 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993500 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993599 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993792 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993839 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993993 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994024 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994884 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994934 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994968 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995070 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995138 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995234 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995258 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995258 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995310 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: 
"1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995334 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995351 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995369 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995371 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995451 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995455 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995481 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995466 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995504 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995526 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995551 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995574 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995598 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995620 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995645 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995712 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995750 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 
24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995772 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995793 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995803 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995808 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995838 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995860 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995887 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995955 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995999 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996020 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996039 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996060 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996080 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996102 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996171 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996176 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996184 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996266 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996289 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996311 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996331 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996335 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996377 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996401 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996403 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996436 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996457 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996477 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996495 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996510 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996525 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996542 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996542 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996562 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996595 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996703 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996973 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996563 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997011 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997036 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997062 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 06:53:37 crc kubenswrapper[4675]: 
I0124 06:53:37.997128 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997151 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997228 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997304 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997343 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997366 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997386 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997434 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997486 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997517 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997563 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997584 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997604 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997637 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997770 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997832 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 06:53:37 crc 
kubenswrapper[4675]: I0124 06:53:37.997866 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997889 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997912 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997970 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997996 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998030 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998055 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998072 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998090 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 
24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998170 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998188 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998205 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998231 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998247 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998268 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998291 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998318 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998341 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 
06:53:37.998362 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998383 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998403 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998437 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998460 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998482 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") 
pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998502 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998666 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998689 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998746 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998794 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998844 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998899 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998921 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998972 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999037 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999055 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999071 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999086 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999103 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999119 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999151 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999171 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999187 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999223 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999240 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999258 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999317 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999333 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999404 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999441 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999462 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999480 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999495 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999516 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999561 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999596 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 
06:53:37.999651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999669 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999711 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999751 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999768 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999786 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999834 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999857 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999873 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 
06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999920 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999942 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999967 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999992 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000017 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000074 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-system-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000112 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-os-release\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-cni-binary-copy\") pod \"multus-zx9ns\" (UID: 
\"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000159 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-kubelet\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-multus-certs\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000198 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000214 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-etc-kubernetes\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000229 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-conf-dir\") pod \"multus-zx9ns\" (UID: 
\"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-multus-daemon-config\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gx8\" (UniqueName: \"kubernetes.io/projected/61e129ca-c9dc-4375-b373-5eec702744bd-kube-api-access-d2gx8\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000350 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000371 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-hostroot\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000400 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-bin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000493 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-multus\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000517 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvpm\" (UniqueName: \"kubernetes.io/projected/581bfd98-ba0e-4e17-812b-088da051ba3c-kube-api-access-wxvpm\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000571 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-cnibin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-socket-dir-parent\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000638 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-netns\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997037 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997140 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997258 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997352 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997446 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997666 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997683 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998079 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998236 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998261 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998456 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998465 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998631 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998834 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998869 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999004 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999105 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999110 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999266 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999434 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999573 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999579 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999736 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999924 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000047 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000159 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000256 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000369 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000586 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005043 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005287 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005454 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005611 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006242 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006414 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006547 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007261 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007376 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007529 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007642 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007897 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008007 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008116 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008306 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009240 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009584 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009746 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010060 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010283 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010391 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010523 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010631 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010780 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010962 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010972 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011126 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011292 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011677 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011888 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011932 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012338 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012558 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012586 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013122 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000661 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-k8s-cni-cncf-io\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013635 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014188 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014382 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014578 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.017732 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.017888 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013466 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018185 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018226 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018252 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/581bfd98-ba0e-4e17-812b-088da051ba3c-hosts-file\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018296 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.019940 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020023 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020048 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020083 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020130 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020491 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020625 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020797 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021073 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021226 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021351 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021407 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021477 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021574 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021702 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022067 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022297 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022435 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022561 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022969 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.036439 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.036914 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.037370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.038198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.041904 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042769 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042976 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043014 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043041 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043179 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043364 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.044684 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.046496 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.037580 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.046939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047161 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047222 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047469 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047485 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047612 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.048498 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.048879 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049021 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049044 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049077 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049249 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049392 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049449 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049582 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049811 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.050527 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.050581 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.55056764 +0000 UTC m=+19.846672863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051075 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051748 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.055840 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056099 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056123 4675 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056140 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" 
(UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056152 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056163 4675 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056172 4675 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056181 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056191 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056203 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056217 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 
06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056231 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056243 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056269 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056283 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056309 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056323 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.056336 4675 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056349 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056362 4675 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056374 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056386 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056399 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056411 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056423 4675 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056436 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056448 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.056449 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056461 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.056551 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.556507312 +0000 UTC m=+19.852612605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056685 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056699 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056728 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056740 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056830 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056843 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056856 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056868 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056879 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056890 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056901 4675 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056912 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056923 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056934 4675 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056944 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056955 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056966 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056977 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056990 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057003 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057015 4675 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057024 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057035 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057048 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057058 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057069 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057079 4675 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057089 4675 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc 
kubenswrapper[4675]: I0124 06:53:38.057099 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057110 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057121 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057132 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057141 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057151 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057161 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057171 4675 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057182 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057193 4675 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057205 4675 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057216 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057227 4675 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057237 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057246 4675 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057257 4675 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057267 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057276 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057287 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057307 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057316 4675 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057326 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057335 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057344 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057353 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057363 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057374 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057384 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057394 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057405 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057415 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057427 4675 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057437 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057449 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.057472 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.557454282 +0000 UTC m=+19.853559505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057482 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057494 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057504 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057515 4675 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057525 4675 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057535 4675 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057545 4675 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057555 4675 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057565 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057575 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057585 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061090 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061113 4675 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061123 4675 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061134 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061150 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061159 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061209 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061218 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061239 4675 
reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061248 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061257 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061269 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061278 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061368 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061376 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061388 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061396 4675 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061404 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061412 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061422 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061430 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061523 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051422 4675 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.068639 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.069319 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.069562 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.069594 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.069608 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.070984 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.071189 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.571156126 +0000 UTC m=+19.867261349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.073677 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075156 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075178 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075196 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075257 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.575235127 +0000 UTC m=+19.871340350 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.079752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.082071 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.084802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.086300 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.092946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c"} Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.092999 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.093383 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.095990 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.101117 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc 
kubenswrapper[4675]: E0124 06:53:38.107094 4675 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.107835 4675 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.122586 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.135299 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.148400 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.160309 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161682 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nqn5c"] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161811 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-multus\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxvpm\" (UniqueName: \"kubernetes.io/projected/581bfd98-ba0e-4e17-812b-088da051ba3c-kube-api-access-wxvpm\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 
06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161859 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-cnibin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161875 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-socket-dir-parent\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161891 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-k8s-cni-cncf-io\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161905 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-netns\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161942 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/581bfd98-ba0e-4e17-812b-088da051ba3c-hosts-file\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-cni-binary-copy\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-kubelet\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161986 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-multus-certs\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162004 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162014 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-system-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-multus\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162018 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-system-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162299 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-os-release\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162314 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-etc-kubernetes\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162350 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-conf-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162365 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-multus-daemon-config\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2gx8\" (UniqueName: \"kubernetes.io/projected/61e129ca-c9dc-4375-b373-5eec702744bd-kube-api-access-d2gx8\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162407 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-cnibin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-hostroot\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162449 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-bin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162500 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-socket-dir-parent\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162514 4675 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162538 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162547 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162548 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-bin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162556 4675 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162564 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162579 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162593 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162598 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-netns\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.162606 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162619 4675 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162629 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-k8s-cni-cncf-io\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163152 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-cni-binary-copy\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/581bfd98-ba0e-4e17-812b-088da051ba3c-hosts-file\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-hostroot\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163331 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-multus-certs\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-kubelet\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") 
" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-os-release\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163547 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-multus-daemon-config\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163552 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-etc-kubernetes\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-conf-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163603 4675 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163615 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163624 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163633 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163641 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163660 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163728 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163738 4675 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" 
DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163747 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163756 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163764 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163773 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163783 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163791 4675 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163800 4675 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163808 4675 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163817 4675 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163825 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163834 4675 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163843 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163852 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163860 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163869 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163877 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163884 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163892 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163901 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163924 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163933 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163941 4675 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc 
kubenswrapper[4675]: I0124 06:53:38.163949 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163958 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163966 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163974 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163981 4675 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163989 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163997 4675 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164006 4675 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164015 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164041 4675 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164059 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164068 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164077 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164085 4675 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164094 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164102 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164111 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164119 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164128 4675 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164136 4675 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164154 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164163 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164171 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164180 4675 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164188 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164197 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164205 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164213 4675 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.165080 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.165322 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.165504 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.166006 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.166605 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.166818 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.172566 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-797q5"] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.172980 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.173076 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.172991 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.173154 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.173979 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178526 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178642 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178713 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178930 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.179257 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.185625 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.185627 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.189890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2gx8\" (UniqueName: \"kubernetes.io/projected/61e129ca-c9dc-4375-b373-5eec702744bd-kube-api-access-d2gx8\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.191516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxvpm\" (UniqueName: \"kubernetes.io/projected/581bfd98-ba0e-4e17-812b-088da051ba3c-kube-api-access-wxvpm\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.196101 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.204454 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.215996 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.225640 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.237393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.243867 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.235605 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-man
ager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268023 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.268078 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-rootfs\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268092 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268119 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc 
kubenswrapper[4675]: I0124 06:53:38.268147 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268161 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268176 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268217 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268237 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-os-release\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268259 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-proxy-tls\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268273 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268290 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod 
\"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-cnibin\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx56z\" (UniqueName: \"kubernetes.io/projected/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-kube-api-access-lx56z\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268354 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268368 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-binary-copy\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268383 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-tuning-conf-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268398 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268411 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268440 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268453 
4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268469 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-system-cni-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268500 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsws\" (UniqueName: \"kubernetes.io/projected/562cfea2-dd3d-4729-8577-10f3a20ee031-kube-api-access-vvsws\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268528 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.275641 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.306340 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-24 06:48:37 +0000 UTC, rotation deadline is 2026-10-15 07:46:54.852605769 +0000 UTC Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.306390 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6336h53m16.546217601s for next certificate rotation Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.307155 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.332772 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.357980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-proxy-tls\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370131 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370150 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370183 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-cnibin\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx56z\" (UniqueName: 
\"kubernetes.io/projected/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-kube-api-access-lx56z\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370260 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-binary-copy\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-tuning-conf-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370330 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370349 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-system-cni-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsws\" (UniqueName: \"kubernetes.io/projected/562cfea2-dd3d-4729-8577-10f3a20ee031-kube-api-access-vvsws\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370452 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370485 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370529 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370552 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370575 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-rootfs\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370627 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370655 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370677 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370700 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-os-release\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370812 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-os-release\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371116 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371148 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371169 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371189 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-cnibin\") pod 
\"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-tuning-conf-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372033 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-binary-copy\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372082 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372523 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-system-cni-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372738 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373030 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373081 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373115 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373478 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373518 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-rootfs\") pod \"machine-config-daemon-nqn5c\" (UID: 
\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373565 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373611 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373635 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" 
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.375038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.375200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.377086 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-proxy-tls\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.393176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.401010 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.401130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsws\" (UniqueName: \"kubernetes.io/projected/562cfea2-dd3d-4729-8577-10f3a20ee031-kube-api-access-vvsws\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.411327 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx56z\" (UniqueName: \"kubernetes.io/projected/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-kube-api-access-lx56z\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.421012 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.450274 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.476374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.483439 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.490641 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.538688 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.558119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.581773 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.581846 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.581869 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.581919 4675 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.581941 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.581916274 +0000 UTC m=+20.878021497 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.581966 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.581959414 +0000 UTC m=+20.878064637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582000 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582012 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582021 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.582034 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582048 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-24 06:53:39.582035526 +0000 UTC m=+20.878140749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.582090 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582307 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582326 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582336 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582388 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.582379433 +0000 UTC m=+20.878484656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582436 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582481 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.582472285 +0000 UTC m=+20.878577508 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.592121 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a02
72f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.629605 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.645195 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.662752 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.816491 4675 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817155 4675 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817220 4675 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817246 4675 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817270 4675 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817304 4675 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc 
kubenswrapper[4675]: W0124 06:53:38.817330 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817541 4675 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817681 4675 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817834 4675 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817964 4675 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818092 4675 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc 
kubenswrapper[4675]: W0124 06:53:38.818205 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818240 4675 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818264 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818408 4675 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818533 4675 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817733 4675 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items 
received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817968 4675 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818853 4675 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818129 4675 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.818018 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.129.56.68:37814->38.129.56.68:6443: use of closed network connection" event="&Event{ObjectMeta:{machine-config-daemon-nqn5c.188d983d909cb6a6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-nqn5c,UID:94e792a6-d8c0-45f7-b7b0-08616d1a9dd5,APIVersion:v1,ResourceVersion:26547,FieldPath:spec.containers{kube-rbac-proxy},},Reason:Created,Message:Created container kube-rbac-proxy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:53:38.813089446 +0000 UTC 
m=+20.109194669,LastTimestamp:2026-01-24 06:53:38.813089446 +0000 UTC m=+20.109194669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817755 4675 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817772 4675 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818148 4675 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818163 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818177 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818191 4675 
reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818818 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818834 4675 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.819396 4675 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.819489 4675 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.819445 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.896367 4675 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:09:13.150264279 +0000 UTC Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.946904 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.947584 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.949092 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.950003 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.951213 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.951832 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.952428 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.953375 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.954051 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.954997 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.955528 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.956278 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.956818 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.957299 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.958192 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.959182 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.961104 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.962051 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.962478 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.964990 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.965610 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.966126 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.967178 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.967647 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.969675 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.970147 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.971187 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.971936 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.972844 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.973366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.974421 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.974941 4675 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.975035 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.975695 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.977186 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.977675 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.978082 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.979588 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.980679 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.981355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.982384 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: 
I0124 06:53:38.983103 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.984074 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.984670 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.985743 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.986777 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.987312 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.988296 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.988835 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: 
I0124 06:53:38.990453 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.990934 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.991412 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.992302 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.993022 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.994070 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.994249 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.994830 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.005847 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.018263 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.032073 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.045892 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.063752 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.075556 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.086514 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.095992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.096089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.096099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cc87a183d9b06b330deaaedc509c7010026936da66d58f04da73f96a46a370ae"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.097504 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.097532 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"10db14f503b4ce765839227d5b3f9b598a73814dbbab4916498b3f3b230881a1"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.098863 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353" exitCode=0 Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.098892 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.098958 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerStarted","Data":"282291569b64ef33054937e21b774e26ae0666154dc0cc7efddcd09c997cddb9"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.100320 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.100366 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.100376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"a7c0234af696ea18ae15eb8998233c8dcb973935db2a39b3ba42ed3aed5468bb"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.101608 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691" exitCode=0 Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.101663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.101679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"62a8cadede7a21145e681044886ca9386d55c6d70c06dc737ae9eedf6acff8c9"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.103072 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.103098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" 
event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"291b9fb73cddf89d9377a7bcbf1dfa6efc6352bc4583ab62951ee27d655d6b90"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.105084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zbs9f" event={"ID":"581bfd98-ba0e-4e17-812b-088da051ba3c","Type":"ContainerStarted","Data":"2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.105128 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zbs9f" event={"ID":"581bfd98-ba0e-4e17-812b-088da051ba3c","Type":"ContainerStarted","Data":"e8fe77af85415ab9b3e86b06b0a7af3731b532aa2d9a6e4c0727fde92796537a"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.105856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"059c896bd8e3e3267509e8f0bdd993f6030629cfeccd00cfe994bf700cf2fe59"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.110553 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.125326 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.153197 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.168709 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.187270 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.207883 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.234269 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.251258 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.268861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.282480 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.305446 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.358104 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.382003 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.433326 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.454064 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.474046 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.593807 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594081 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.59404717 +0000 UTC m=+22.890152423 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.594336 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.594470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594496 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594670 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594691 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594767 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.594752865 +0000 UTC m=+22.890858178 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594541 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594803 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594823 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-24 06:53:41.594811936 +0000 UTC m=+22.890917289 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594868 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.594852057 +0000 UTC m=+22.890957280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.594628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.595205 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595401 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595439 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595452 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595513 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.595493702 +0000 UTC m=+22.891598925 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.713216 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.723915 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.736569 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.742236 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.747101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.752601 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.760333 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.831940 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.833476 4675 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.834740 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.896595 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:36:36.172930099 +0000 UTC Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.911962 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.912038 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.941583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.941674 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.941648 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.941859 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.942020 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.942147 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.973018 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.998091 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.003969 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.059104 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.076243 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.083629 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085312 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.094085 4675 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.094254 4675 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095052 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095333 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.097260 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111568 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111642 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.113754 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827" exitCode=0 Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.113854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827"} Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.114789 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121198 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.130871 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.136102 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140693 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140756 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.142134 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.147861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.156817 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.161997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162056 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.167813 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.169213 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.178173 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182125 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182142 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.188658 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.191606 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7rtdz"] Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.191917 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.193707 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.193932 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.194163 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.195349 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.203350 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.203477 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.208212 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc 
kubenswrapper[4675]: I0124 06:53:40.212944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212955 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.227347 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.240945 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.244138 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.244326 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.251387 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.262252 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.274060 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.288819 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.289072 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" 
Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.295258 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.301758 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac11
7eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.303537 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhd5\" (UniqueName: \"kubernetes.io/projected/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-kube-api-access-2hhd5\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.303603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-host\") pod 
\"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.303766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-serviceca\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.311847 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.314324 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 
06:53:40.315215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315237 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.326122 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4
c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.349939 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.362188 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.376940 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.400497 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-host\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-serviceca\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405193 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-host\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-serviceca\") pod \"node-ca-7rtdz\" (UID: 
\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.406029 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhd5\" (UniqueName: \"kubernetes.io/projected/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-kube-api-access-2hhd5\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.418457 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.418586 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.435241 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.435653 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hhd5\" (UniqueName: \"kubernetes.io/projected/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-kube-api-access-2hhd5\") pod 
\"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.473890 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.485227 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.497351 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.506954 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.513114 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520911 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.521950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.535127 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.571442 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.612073 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623888 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623896 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.651953 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.691344 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726071 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726080 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726101 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828288 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828375 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.896853 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:11:22.728544044 +0000 UTC Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931131 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931206 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931216 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034220 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.123084 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9" exitCode=0 Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.123081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.125753 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rtdz" event={"ID":"ac8e7205-a99a-4174-bd7c-5ddaa11f9916","Type":"ContainerStarted","Data":"7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.125818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rtdz" event={"ID":"ac8e7205-a99a-4174-bd7c-5ddaa11f9916","Type":"ContainerStarted","Data":"f9c1a2d884f4a8d0dc573c5639915dc4adb12c2b635b2402a8ed4aadbe8e328e"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.126879 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.130460 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136550 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136631 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.138234 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.154053 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.164965 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.186126 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.199063 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.212192 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.226349 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.237621 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc 
kubenswrapper[4675]: I0124 06:53:41.239600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239612 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.257042 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2
d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.267478 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.280518 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.291699 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.302367 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.314429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.329335 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 
06:53:41.338432 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341837 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.370413 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.409567 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444561 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.452245 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.490151 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.497227 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.511367 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.533118 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546293 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546335 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546365 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546378 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.556871 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.592127 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617410 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617658 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617784 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617827 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617842 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617901 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.617882782 +0000 UTC m=+26.913988015 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617789 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617960 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617984 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617798 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:41 crc 
kubenswrapper[4675]: E0124 06:53:41.617788 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618059 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618037766 +0000 UTC m=+26.914143039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618086 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618074807 +0000 UTC m=+26.914180070 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618107 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618096707 +0000 UTC m=+26.914201970 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618165 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618152858 +0000 UTC m=+26.914258111 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.630703 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649157 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649183 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649194 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.672980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.713225 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751086 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.752546 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.791707 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.843164 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.871647 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.897756 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:34:38.660105757 +0000 UTC Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.909581 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.942279 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.942339 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.942441 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.942497 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.942616 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.942706 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956143 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956159 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956171 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.957145 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.989652 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.031502 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058361 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058370 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.070357 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.111698 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.135570 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345" exitCode=0 Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.135666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" 
event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.154753 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.159984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160069 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.195248 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.232121 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264181 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.275827 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.310919 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.350700 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366851 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366863 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366880 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366892 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.393140 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.434070 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468598 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468640 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.494861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.564511 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570371 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.581022 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.594016 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.635002 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.671088 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672287 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672688 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.710751 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.750432 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778376 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778476 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.797278 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.831833 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.870477 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.879966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880039 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.898653 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:47:17.805265116 +0000 UTC Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.912182 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.952652 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc 
kubenswrapper[4675]: I0124 06:53:42.982291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982300 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.994086 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2
d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.028669 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c
85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084554 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084567 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.141564 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86" exitCode=0 Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.141629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.147335 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.160173 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.182950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187382 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.198412 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.212446 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.232183 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.270980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289160 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.310576 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.351479 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391493 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.396413 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.435082 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2
d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.470537 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494839 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494869 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.515743 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.557901 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597799 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.599009 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.636710 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.700955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.700996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.701005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc 
kubenswrapper[4675]: I0124 06:53:43.701024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.701033 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803828 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803861 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.899542 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:54:05.932109825 +0000 UTC Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906436 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906534 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.942309 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.942417 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.942443 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:43 crc kubenswrapper[4675]: E0124 06:53:43.942540 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:43 crc kubenswrapper[4675]: E0124 06:53:43.942658 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:43 crc kubenswrapper[4675]: E0124 06:53:43.942751 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.008951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009045 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111499 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.157907 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb" exitCode=0 Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.157964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.174661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.192607 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.203964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214212 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.223143 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.236942 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.247692 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.259245 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.271092 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.283858 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.298123 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.312477 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315701 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.329619 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.344052 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.356460 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.367922 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc 
kubenswrapper[4675]: I0124 06:53:44.417603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417637 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519456 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622747 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.725805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726066 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726083 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726100 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829490 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.900642 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:01:39.536102549 +0000 UTC Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.931979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932066 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034521 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034553 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136972 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.165426 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerStarted","Data":"719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.170403 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.170707 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.179364 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.193185 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.193668 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.214486 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.228537 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 
06:53:45.239600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.241263 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.257549 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.269145 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.282979 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.295245 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.308390 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.326668 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.339069 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341884 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341935 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.351414 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.365462 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.377745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.389096 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.401829 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.416062 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.426956 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.439277 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444528 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444556 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.450920 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.468776 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711
c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.482184 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.493312 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.510097 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.523006 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.536880 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547893 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547924 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.549974 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.569417 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-m
etrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-
controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.582293 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649891 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649910 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649922 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655547 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655758 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.6557051 +0000 UTC m=+34.951810323 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655764 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655879 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655830 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655941 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655966 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655910 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656025 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656040 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656041 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656050 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656066 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656058238 +0000 UTC m=+34.952163461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655920 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656096 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656081109 +0000 UTC m=+34.952186332 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656110 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656105529 +0000 UTC m=+34.952210752 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656152 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656123699 +0000 UTC m=+34.952228922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752260 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855267 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.900991 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:57:50.771972406 +0000 UTC Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.942491 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.942516 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.942547 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.942668 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.942859 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.942967 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958246 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060475 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060486 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162824 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162890 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162945 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162967 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.172970 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.174527 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.200142 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.217430 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.231320 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.245022 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.260824 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265825 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265883 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.287381 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.298596 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.315309 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.328108 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.340126 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.354830 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370360 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370423 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.375476 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.385678 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.404625 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.418764 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.431119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472435 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472464 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626306 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728917 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.830988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831069 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.901747 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:54:31.2999394 +0000 UTC Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.137955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138065 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.175564 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343080 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343094 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445880 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549617 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549630 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.653563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654547 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758386 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758409 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861763 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861824 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.901870 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:07:36.110080088 +0000 UTC Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.942375 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.942446 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.942397 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:47 crc kubenswrapper[4675]: E0124 06:53:47.942639 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:47 crc kubenswrapper[4675]: E0124 06:53:47.942858 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:47 crc kubenswrapper[4675]: E0124 06:53:47.943031 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964943 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068359 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.171914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.171965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.171983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.172005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.172022 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.185354 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/0.log" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.188644 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670" exitCode=1 Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.188690 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.189583 4675 scope.go:117] "RemoveContainer" containerID="bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.210627 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.226359 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.257841 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c
9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275698 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.276621 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.297625 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.314938 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.327049 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.343119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.359442 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.376778 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378458 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378497 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.390377 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:
53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.406805 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.450360 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.466029 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.480425 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481324 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.487881 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.504168 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.518694 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.533782 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.561745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c
9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.579496 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584180 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584234 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.596384 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.615525 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.627408 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.654433 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.671815 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.687970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.687998 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.688009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.688025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.688036 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.696305 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 
06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.715997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.730944 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.745415 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.758313 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc 
kubenswrapper[4675]: I0124 06:53:48.790492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790568 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.892947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.892986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.892997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.893014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.893026 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.902257 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:44:58.193951048 +0000 UTC Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.960173 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.972086 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.982536 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.994920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.994967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.994985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.995006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.995020 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.010024 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c
9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.023054 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.033238 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.047509 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.060548 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.074657 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.090479 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099287 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.103554 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.117912 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.132579 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.158917 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.172472 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.193893 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/0.log" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.197036 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.197122 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201676 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201865 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.213117 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.225216 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.237775 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.269168 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.281809 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.294836 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.306376 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.320975 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326865 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.331619 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.348961 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d
0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:
53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.362415 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b44
87364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.398620 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.480741 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481728 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.498576 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.511498 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.583958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc 
kubenswrapper[4675]: I0124 06:53:49.584050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.584063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.584086 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.584099 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687562 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687649 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687791 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790869 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790937 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895773 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.902900 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:33:58.182721246 +0000 UTC Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.941624 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.941696 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.941647 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:49 crc kubenswrapper[4675]: E0124 06:53:49.941847 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:49 crc kubenswrapper[4675]: E0124 06:53:49.942092 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:49 crc kubenswrapper[4675]: E0124 06:53:49.942245 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998679 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101694 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101938 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.202358 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.203044 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/0.log" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204259 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204274 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.206820 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" exitCode=1 Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.206875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.206938 4675 scope.go:117] "RemoveContainer" containerID="bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.207624 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.207796 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.221911 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.246305 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.261334 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.273235 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.286368 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.299434 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 
06:53:50.307139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307156 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.317122 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.333972 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.366130 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.385627 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.400247 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410834 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.417381 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.436076 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.448296 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.478955 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.513655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.513935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.514047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.514135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.514217 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518599 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.535474 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.545512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.545834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.545977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.546076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.546154 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.560412 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.564923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.564974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.564985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.565003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.565015 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.580428 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585140 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.597335 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602150 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.616295 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.616452 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.617941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.617981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.617992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.618010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.618020 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720508 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720591 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823234 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.903464 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:39:58.241168735 +0000 UTC Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926153 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.000419 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8"] Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.001240 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.003323 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.005687 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.021982 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc 
kubenswrapper[4675]: I0124 06:53:51.028712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028748 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.038930 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.052838 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.073541 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.084879 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.096464 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 
06:53:51.116324 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff3922
80022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.127960 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130927 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.139352 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.150237 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.160617 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.172199 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.184900 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194441 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194514 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ltl\" (UniqueName: \"kubernetes.io/projected/d143943f-5bfe-4381-b997-c99ce1ccf80b-kube-api-access-84ltl\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194596 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.202403 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.212958 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.218123 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.218623 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.219070 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.231853 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233849 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.243695 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.254782 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.274455 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.284874 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295412 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295706 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ltl\" (UniqueName: \"kubernetes.io/projected/d143943f-5bfe-4381-b997-c99ce1ccf80b-kube-api-access-84ltl\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.296455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.297519 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.301165 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.302026 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.311366 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.315367 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ltl\" (UniqueName: \"kubernetes.io/projected/d143943f-5bfe-4381-b997-c99ce1ccf80b-kube-api-access-84ltl\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.320302 4675 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.323821 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: W0124 06:53:51.335408 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd143943f_5bfe_4381_b997_c99ce1ccf80b.slice/crio-eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3 WatchSource:0}: Error finding container eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3: Status 404 returned error can't find the container with id eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3 Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.335508 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336199 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336207 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.349930 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.360281 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.373047 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.392118 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.414089 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.425563 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.437590 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc 
kubenswrapper[4675]: I0124 06:53:51.440674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440705 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.449262 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542709 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542748 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542759 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645542 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645598 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748908 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851831 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.904420 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:48:08.326923532 +0000 UTC Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.941845 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.941898 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.941969 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.941848 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.942059 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.942105 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954846 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056499 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056510 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.117533 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8mdgj"] Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.119025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.119127 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.134320 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158746 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158777 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.173327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.204010 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.204105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbqr\" (UniqueName: \"kubernetes.io/projected/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-kube-api-access-wtbqr\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.213497 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.221522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" event={"ID":"d143943f-5bfe-4381-b997-c99ce1ccf80b","Type":"ContainerStarted","Data":"63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.221818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" event={"ID":"d143943f-5bfe-4381-b997-c99ce1ccf80b","Type":"ContainerStarted","Data":"04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.221896 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" 
event={"ID":"d143943f-5bfe-4381-b997-c99ce1ccf80b","Type":"ContainerStarted","Data":"eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.226596 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.235823 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.245572 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.256274 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260426 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.268905 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.279852 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.289276 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.301270 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.304711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtbqr\" (UniqueName: \"kubernetes.io/projected/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-kube-api-access-wtbqr\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.304783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.305535 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.305779 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:52.805761151 +0000 UTC m=+34.101866384 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.314387 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b44
87364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.321413 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtbqr\" (UniqueName: \"kubernetes.io/projected/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-kube-api-access-wtbqr\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.333563 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.344505 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.356532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362411 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362481 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.369996 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:
53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.380337 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.392623 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.411977 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.430079 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.441046 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.452215 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.464967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc 
kubenswrapper[4675]: I0124 06:53:52.465008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465048 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465016 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.477041 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.487850 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.507289 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.519742 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.529011 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.539962 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc 
kubenswrapper[4675]: I0124 06:53:52.551645 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.563991 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566842 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.575112 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.585433 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.599940 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669146 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669175 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.773759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774542 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.811046 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.811188 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.811233 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.81121965 +0000 UTC m=+35.107324873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.877027 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.907744 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:13:12.390665847 +0000 UTC Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980053 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980110 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083668 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186290 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186317 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289356 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289416 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391482 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495066 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495078 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495110 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598899 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598954 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.702004 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.720679 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.720891 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.720967 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.720928253 +0000 UTC m=+51.017033506 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721034 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721054 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721067 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.721103 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721118 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" 
failed. No retries permitted until 2026-01-24 06:54:09.721100869 +0000 UTC m=+51.017206102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.721168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.721211 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721346 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721406 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721408 4675 configmap.go:193] Couldn't get 
configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721367 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721470 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.721456787 +0000 UTC m=+51.017562050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721422 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721494 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.721481918 +0000 UTC m=+51.017587181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721562 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.721538249 +0000 UTC m=+51.017643542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805269 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805298 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.822429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.822597 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.822698 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:55.82267041 +0000 UTC m=+37.118775713 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.907823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.908502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.908628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.907905 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:34:28.425675674 +0000 UTC Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.908987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.909104 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941863 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.942189 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941898 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.942529 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941892 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.942878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941947 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.943223 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.011632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.011942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.012049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.012175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.012331 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115361 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115449 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.218273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.218573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.218866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.219133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.219397 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322698 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.424960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425101 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528306 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528320 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630581 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630643 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733149 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.836012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.909259 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:35:59.760788087 +0000 UTC Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939130 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041507 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144322 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247498 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453411 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556422 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.658880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659761 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762893 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.842035 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.843193 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.843302 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:59.843277705 +0000 UTC m=+41.139382948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865509 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.910137 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:08:45.209984193 +0000 UTC Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942209 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942208 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942369 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942233 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942421 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942212 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942461 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942497 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967589 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967603 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.070942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071094 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173996 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277760 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381219 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484155 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484230 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484270 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484286 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586763 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586774 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689454 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791824 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791850 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.910935 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:14:21.729817775 +0000 UTC Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.005675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006150 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006311 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109175 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213431 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213477 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318154 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421350 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.523985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524061 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524071 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.627859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628263 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628951 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732971 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.836960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837072 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.912060 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:57:47.531241291 +0000 UTC Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941345 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941774 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.941964 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.942025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.942096 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.942213 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.942039 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.942338 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.942404 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.043873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148391 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251667 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355826 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355835 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458303 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561187 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664081 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664091 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664116 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.766943 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767069 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869695 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.913541 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:25:35.442483698 +0000 UTC Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.958056 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972081 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972102 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.973403 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.989135 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.000860 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.018755 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.030597 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.042848 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.066478 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d59
31e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074630 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.080982 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.092394 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 
2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.108252 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 
2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.122501 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.136517 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.149532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.169452 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177106 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.184164 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.195315 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278861 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278987 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381207 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381276 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484864 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587240 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689838 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792418 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.884582 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.884698 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.884755 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:54:07.884741072 +0000 UTC m=+49.180846295 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895866 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.914536 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:48:25.375823979 +0000 UTC Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942023 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942227 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942306 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942367 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942399 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942498 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942593 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942794 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.998938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999015 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999094 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.101969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.101998 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.102006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.102019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.102028 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.204741 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205248 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.307913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.307978 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.307995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.308020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.308038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.410651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411201 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411529 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514650 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.617409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.617658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.617874 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.618011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.618174 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.726008 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829072 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.915282 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:41:07.148235642 +0000 UTC Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932223 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012310 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.028017 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032851 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032885 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.046741 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050375 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.063556 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067741 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067785 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.081926 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087441 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.101919 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.102030 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103890 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103900 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103925 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424747 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424830 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531857 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634341 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634365 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839289 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839324 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.916339 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:28:11.535786337 +0000 UTC Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941489 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941554 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.941651 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941869 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941878 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.942066 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.942075 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.942210 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.942434 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044085 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146349 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146371 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248841 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350826 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453978 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.658911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659281 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864835 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864842 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.917443 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:38:11.184778358 +0000 UTC Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967563 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.069968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070067 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172518 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274786 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377428 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.479991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480071 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480114 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582631 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582664 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685434 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685532 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787688 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.889994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890471 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.918399 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:05:24.912853878 +0000 UTC Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942077 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942207 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942077 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942362 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942445 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942584 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942792 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993580 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096071 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096142 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199336 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302269 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302281 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404989 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.405012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507835 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610485 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713512 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816344 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.918536 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:38:01.37459334 +0000 UTC Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919372 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919418 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.942604 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022691 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022813 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022828 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.125639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.125969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.126141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.126257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.126369 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.233043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335890 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335898 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335912 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335922 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438402 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.442056 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.445432 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.445547 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.469003 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.492627 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.512914 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.532011 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540512 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.545956 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.562172 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.575997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.591363 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.606131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.616097 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.626893 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.637368 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643411 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643473 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.653151 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.664100 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 
2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.676674 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 
2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.689474 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.711661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745688 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848782 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.919947 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:41:19.790179828 +0000 UTC Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942334 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942406 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.942691 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.943051 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942476 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.943259 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.942867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950867 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052939 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155764 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155782 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258421 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258454 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362254 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362371 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.452807 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.453783 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.459077 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" exitCode=1 Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.459159 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.459211 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.460666 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:06 crc kubenswrapper[4675]: E0124 06:54:06.461032 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.466163 4675 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.466534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.466946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.467545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.468284 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.488623 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.507864 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.528951 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.554781 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc 
kubenswrapper[4675]: I0124 06:54:06.571922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571933 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.582015 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae
637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.589854 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.598115 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc 
kubenswrapper[4675]: I0124 06:54:06.614235 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.624143 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.634893 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.647964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.663480 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674211 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.675671 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.684379 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.700585 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: 
could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"n
ame\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.710598 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.722179 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776525 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776606 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880038 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880080 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.920681 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:03:51.51263916 +0000 UTC Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982453 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982521 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085421 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085504 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188520 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188571 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188590 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292264 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292275 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395450 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.463037 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497659 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497668 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601261 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705219 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807987 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.887573 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.887826 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.887947 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:54:23.887910569 +0000 UTC m=+65.184015832 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910636 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910675 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.920868 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:22:17.366114499 +0000 UTC Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942313 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942421 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942453 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942562 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942863 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942963 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013548 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.068929 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.069761 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:08 crc kubenswrapper[4675]: E0124 06:54:08.069976 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.082500 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.098242 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.107437 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116377 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.128532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.141664 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.153916 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.164191 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc 
kubenswrapper[4675]: I0124 06:54:08.179591 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.195653 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.207161 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218542 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218948 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc 
kubenswrapper[4675]: I0124 06:54:08.218961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218970 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.232388 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae
637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.253332 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.274087 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.291013 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.308769 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321383 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321426 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.328385 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:
53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424888 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527305 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527406 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630142 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630193 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630211 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834978 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.921205 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:00:33.53604663 +0000 UTC Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.928785 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.938244 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.941308 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944837 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.958091 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.966313 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.984380 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.996116 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.006892 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.026095 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.036418 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.045578 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047259 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.058042 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.070400 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.084955 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.095853 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.108110 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.125882 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.139913 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.148975 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149016 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.156400 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.166938 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.180439 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc 
kubenswrapper[4675]: I0124 06:54:09.191642 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.202907 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.216288 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.227967 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.238002 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.250369 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252487 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.263676 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.294935 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.306549 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.317523 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.329887 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.340567 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358824 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358855 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.359375 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.369491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.390155 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.402309 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461173 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461521 4675 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461546 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564466 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668102 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668346 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.770996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771068 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771133 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804655 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804776 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804798 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804820 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804856 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.804932 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.804969 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.804935322 +0000 UTC m=+83.101040575 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805018 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.805003794 +0000 UTC m=+83.101109117 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805045 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805114 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805130 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805847 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805142 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.805116576 +0000 UTC m=+83.101221839 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805138 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805929 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.806006 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.805982848 +0000 UTC m=+83.102088121 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805886 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.806115 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.806094091 +0000 UTC m=+83.102199314 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873145 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.921853 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:46:11.814845102 +0000 UTC Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942472 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942524 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942571 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942619 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942609 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942798 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942859 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942885 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975281 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975295 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077203 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179409 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.282979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283041 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.385930 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386191 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590885 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796640 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899822 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.922531 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:57:07.596258433 +0000 UTC Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103681 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103789 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.206924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.206991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.207014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.207090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.207119 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310231 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310254 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310272 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412801 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471569 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.495467 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499407 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.513827 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.518942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.518974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.518985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.519002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.519013 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.533190 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.537953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538105 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.554091 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558239 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.568861 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.568990 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570373 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672835 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775849 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775871 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878306 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878339 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.922773 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:19:07.787339241 +0000 UTC Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941535 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941614 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941626 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941563 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.941737 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.941848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.941918 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.942015 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980670 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082763 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185228 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288195 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390633 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492669 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492745 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595190 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698182 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804618 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804710 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907386 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907431 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907462 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907477 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.923036 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:50:37.021285994 +0000 UTC Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010173 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010215 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112754 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112832 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215659 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318370 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.420973 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421060 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524759 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628441 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628497 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731195 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834015 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834145 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.923469 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:04:20.891841778 +0000 UTC Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936561 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941746 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941747 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941804 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941848 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.941971 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.942050 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.942100 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.942167 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039578 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143793 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143809 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247673 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453290 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453310 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.555997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556086 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556134 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658946 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763040 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763138 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866627 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866669 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.924336 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:02:06.318468598 +0000 UTC Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969425 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969438 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.071952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.071981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.071991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.072004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.072012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174676 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174691 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174700 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277523 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381608 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484566 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587284 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792159 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792199 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894708 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.924968 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:17:47.274777885 +0000 UTC Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942262 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942334 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942376 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942453 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942566 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942616 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942680 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942831 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997398 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100514 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202566 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305530 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407416 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510478 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612798 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.715961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716178 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819969 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923562 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923571 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.925657 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:40:07.10918168 +0000 UTC Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026258 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129311 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129359 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232130 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335202 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437810 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540834 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.648938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.648974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.648986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.649002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.649015 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751896 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.926343 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:29:07.319592011 +0000 UTC Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.941648 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.941708 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.941800 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.941935 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.941996 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.942049 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.942205 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.942332 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958071 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958141 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061297 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165355 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.268886 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.268965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.268987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.269020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.269043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371878 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475269 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577404 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681277 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784519 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887395 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887433 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.926581 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:31:17.822097179 +0000 UTC Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.957076 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.980117 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.990970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.991034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.991047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 
06:54:18.991066 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.991077 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.992430 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.006355 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.020312 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350
c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.034823 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc 
kubenswrapper[4675]: I0124 06:54:19.047408 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.059287 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.071410 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.086049 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093352 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093361 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.100768 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.111368 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.128287 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.137975 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.153477 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.162504 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.172667 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.182033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195336 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195450 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298119 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401613 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.505382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.505413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.506079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.506116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.506128 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609551 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712711 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815963 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917574 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917612 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.926932 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:57:06.654012238 +0000 UTC Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942334 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.942475 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942645 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.942758 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942954 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942994 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.943174 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.943015 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020189 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122397 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224857 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224874 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327728 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532584 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634799 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634844 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634869 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737417 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840035 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840082 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840091 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.927628 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:03:27.089314885 +0000 UTC Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.942423 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:20 crc kubenswrapper[4675]: E0124 06:54:20.942619 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944836 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047913 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150571 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150601 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254528 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254544 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.356986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357059 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459621 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566810 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668884 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668975 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771993 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.772015 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807270 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807289 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.824617 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.829942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830044 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.844022 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847951 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.861170 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865688 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865795 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.888301 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891862 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891910 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891933 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891942 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.905607 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.905744 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907127 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907184 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907195 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.928859 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:27:38.489862724 +0000 UTC Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942453 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942489 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942481 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942468 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942588 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942768 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942803 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942877 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.009969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010015 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010037 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112412 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112519 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214472 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317337 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.420942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.420990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.421001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.421019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.421031 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523343 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.625952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.625996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.626003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.626017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.626028 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729762 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832320 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832349 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.929029 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:20:27.946098278 +0000 UTC Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.934996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935122 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036915 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139737 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139752 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243126 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.345984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346040 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346097 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447844 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447882 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.549935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.549972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.549983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.550002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.550013 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652665 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755191 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858222 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.929649 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:58:25.903883346 +0000 UTC Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942080 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942214 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942271 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942327 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942434 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942477 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942529 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.958309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.958439 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.958492 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:54:55.95847452 +0000 UTC m=+97.254579743 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960991 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063294 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165556 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.267987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268040 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268052 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268119 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371240 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371278 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474170 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576625 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679540 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782052 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782095 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884921 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.885002 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.930101 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:10:18.511010389 +0000 UTC Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.952131 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089744 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.191735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.191992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.192063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.192137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.192203 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294677 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397742 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500514 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.523330 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/0.log" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.523415 4675 generic.go:334] "Generic (PLEG): container finished" podID="61e129ca-c9dc-4375-b373-5eec702744bd" containerID="6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a" exitCode=1 Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.523590 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerDied","Data":"6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.524043 4675 scope.go:117] "RemoveContainer" containerID="6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.544040 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.555807 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.566915 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.588586 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 
06:54:25.602967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.603011 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.618487 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.635236 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.645276 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350
c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.657208 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc 
kubenswrapper[4675]: I0124 06:54:25.667895 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.679253 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.688839 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.702175 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706468 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.715604 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.727848 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.756327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.772571 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.787141 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.797414 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809768 4675 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912288 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.930393 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:39:31.368348704 +0000 UTC Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941684 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941732 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941775 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.941777 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941831 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.941971 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.942094 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.942189 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015153 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117605 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219827 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322305 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322339 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424711 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.527405 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/0.log" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.527453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.537458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.546317 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\"
:\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.554049 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.571575 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.581358 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.591080 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.605083 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.616429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.626913 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628305 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628339 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.638264 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82
dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.648583 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.660079 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.670301 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.682094 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.694581 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.713676 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.725634 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730684 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.735946 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.744781 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832877 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832890 4675 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.930868 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:06:52.218392328 +0000 UTC Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935454 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935494 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038524 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038550 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140838 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243139 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345068 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446798 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548709 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548740 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651550 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754182 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754222 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.931140 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:07:15.711185069 +0000 UTC Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942497 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942551 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942513 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942688 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942663 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942755 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942827 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942920 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959816 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062304 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164307 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266523 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266533 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368927 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471622 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676970 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779599 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882302 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.932129 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:18:13.6025108 +0000 UTC Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.956286 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.968248 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.978458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984617 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.988273 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.000567 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d
0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:
53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.009661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.020395 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc 
kubenswrapper[4675]: I0124 06:54:29.030163 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.050595 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.062161 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.078075 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088610 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.096555 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319
f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.108231 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.116664 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.128200 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.137598 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.153001 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.164362 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.175818 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190653 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293434 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293445 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395828 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.497994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498022 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498052 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600395 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600669 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704437 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806660 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910808 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910833 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.932627 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:04:42.135659149 +0000 UTC Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.941975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942099 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.942277 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942345 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.942475 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942540 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.942668 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942756 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013686 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115623 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.217950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218050 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320356 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320398 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422776 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525143 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525176 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627399 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731264 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731288 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731305 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835436 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.933193 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:11:49.896503751 +0000 UTC Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937607 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937634 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040183 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040192 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142964 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245693 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245732 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348693 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348755 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450829 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553306 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655813 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655843 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758405 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861570 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.934241 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:17:27.962671975 +0000 UTC Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.941795 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.941868 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.941870 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.941988 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.942067 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.942130 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.942091 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.942316 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963852 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.065950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.065994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.066005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.066021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.066031 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168115 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188233 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.208491 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.212869 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.212961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.212981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.213006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.213074 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.232763 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237506 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.259676 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264486 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.280417 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.284877 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.284956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.284981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.285013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.285036 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.299328 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.299472 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301212 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301240 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301257 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404623 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507381 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609999 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712511 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814680 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814754 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814779 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917351 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.934830 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:23:19.637342851 +0000 UTC Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019576 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019674 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122619 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122682 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225205 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328212 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328227 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431261 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534828 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534846 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637823 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740244 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842525 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842961 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.935686 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:49:43.937959265 +0000 UTC Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.941952 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.942083 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.942004 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.941980 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942368 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942538 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942651 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942846 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946282 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.048996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049061 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049089 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151871 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265410 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265574 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368411 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574778 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678127 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678187 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781016 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781117 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781155 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883410 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883421 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.936225 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:53:20.95012673 +0000 UTC Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.942376 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986122 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089182 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089266 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295808 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.398935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399112 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.510140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511985 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.561786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.567321 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.568250 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.588638 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.606369 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615106 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.632888 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\
\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.645769 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.660069 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.673299 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.686251 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.701605 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b
1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.714271 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717826 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.733232 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.743853 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.756233 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.772045 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.791974 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.805781 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.820962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821053 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.822654 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.839003 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.865697 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.875317 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923599 4675 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.936831 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:43:54.563550952 +0000 UTC Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942087 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942146 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942103 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942146 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942202 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942304 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942412 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942445 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025749 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128078 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128115 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.229994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230070 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333581 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436681 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538610 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.572481 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.573348 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.576712 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" exitCode=1 Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.576790 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.576828 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.578067 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:54:36 crc kubenswrapper[4675]: E0124 06:54:36.578406 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.593962 4675 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.606395 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.620418 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.635962 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640350 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.655407 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.667294 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.676703 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.702249 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:36Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0124 06:54:36.196176 6594 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97
691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.714672 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.727759 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743131 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743279 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.749204 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.767544 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.789192 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.806033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.821819 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.831491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.839924 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845462 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.851838 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82
dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.861986 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.937234 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:54:56.457892262 +0000 UTC Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947379 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.050001 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153605 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153701 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255764 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358109 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460603 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.564043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.583609 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.589260 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.589648 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.610108 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.626131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.652558 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:36Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", 
Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0124 06:54:36.196176 6594 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666441 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666481 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666528 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.668520 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.680687 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc 
kubenswrapper[4675]: I0124 06:54:37.697611 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.717473 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.736972 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.748956 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.762102 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc 
kubenswrapper[4675]: I0124 06:54:37.768507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768523 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.780857 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae
637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.791253 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.803192 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T0
6:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.821865 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.833188 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.844654 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.860476 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870940 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.873857 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.888101 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.938146 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:18:18.537932568 +0000 UTC Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.941514 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.941783 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.941952 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.942125 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.942341 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.942380 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.942442 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.942622 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.977379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.977667 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.977876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.978466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.978481 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081348 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184625 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287708 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390317 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390327 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.494989 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495022 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495051 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.598004 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700914 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803848 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907381 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.938701 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:47:42.186756429 +0000 UTC Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.956148 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.968349 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.979745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.989996 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.002570 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012840 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.015680 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.031813 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.046325 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6f
c829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.076397 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.092580 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.105458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.115580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.115817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.115885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.116007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.116127 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.117788 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319
f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.139555 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.151675 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.168922 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.183394 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.206487 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:36Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", 
Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0124 06:54:36.196176 6594 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218436 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218504 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.221289 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.231583 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322769 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425455 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425611 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528589 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528656 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631630 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734115 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836841 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.938913 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:40:38.027009065 +0000 UTC Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939489 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939512 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.941944 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.941962 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.941968 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942041 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.942164 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942234 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942352 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942481 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042068 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.144976 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247576 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.452983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453077 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555617 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555660 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658422 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761456 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.863964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864649 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.939460 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:16:24.183373751 +0000 UTC Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.968508 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.968880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.969098 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.969402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.969624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072852 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072907 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072977 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176440 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280676 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384160 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384180 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487269 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487388 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590928 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694472 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694515 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.797926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.797995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.798017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.798045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.798066 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849185 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849482 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.849441123 +0000 UTC m=+147.145546386 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849604 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849641 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849655 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849854 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849884 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849709 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.849676999 +0000 UTC m=+147.145782302 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849941 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849986 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.849959735 +0000 UTC m=+147.146065068 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849991 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.850027 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.850032 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.850014707 +0000 UTC m=+147.146120120 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.850123 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.850097549 +0000 UTC m=+147.146202822 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904527 4675 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.940401 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:03:34.298918683 +0000 UTC Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.941755 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.941769 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.941877 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.942117 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942248 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942056 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942358 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942461 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008246 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110769 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213199 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316910 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419149 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419348 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522316 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614819 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.633774 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638693 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.657352 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662557 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.682174 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686581 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686636 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.700428 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.705875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.705949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.705971 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.706000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.706020 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.725144 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.725528 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728181 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830994 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.932970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933038 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933048 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.941000 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:28:42.960103205 +0000 UTC Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036552 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.138999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139078 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139160 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241900 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345622 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345660 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448937 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.655801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656581 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760862 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864407 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864446 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941841 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941883 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941883 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941900 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:54:35.275264497 +0000 UTC Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942042 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.942100 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942334 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942326 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942469 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967892 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071777 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175152 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175205 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317087 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420399 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523473 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626347 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729608 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833589 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833622 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937448 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.942910 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:47:19.925760638 +0000 UTC Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040125 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040151 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143480 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245380 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348142 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348184 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450898 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450906 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553458 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553475 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553488 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.656011 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759386 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759407 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759423 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862619 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862659 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.941900 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.941971 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.942480 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.942544 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942610 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942709 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942905 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.943578 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:47:56.940609845 +0000 UTC Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.965376 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.965781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.965957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.966098 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.966236 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069317 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069427 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.172914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173640 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.276947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277062 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277082 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483309 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587163 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799805 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903182 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.944299 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:25:30.375443891 +0000 UTC Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006102 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006142 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109183 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211771 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.313937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.313989 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.314001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.314017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.314033 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416325 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519336 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519365 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621091 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621100 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826218 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929309 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.941904 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.941932 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.941958 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.941993 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.942103 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.942098 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.942148 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.942188 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.945165 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:04:56.035279544 +0000 UTC Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031920 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134874 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134896 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.237925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.237974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.237991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.238014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.238032 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341175 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444610 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547952 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651588 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756859 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859499 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.945683 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:15:38.49288908 +0000 UTC Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962907 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962971 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.988039 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.988021363 podStartE2EDuration="1m7.988021363s" podCreationTimestamp="2026-01-24 06:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:48.987566073 +0000 UTC m=+90.283671296" watchObservedRunningTime="2026-01-24 06:54:48.988021363 +0000 UTC m=+90.284126586" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065571 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065604 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.066748 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.066737362 podStartE2EDuration="1m12.066737362s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.066067616 +0000 UTC m=+90.362172859" watchObservedRunningTime="2026-01-24 06:54:49.066737362 +0000 UTC m=+90.362842585" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.066973 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zx9ns" podStartSLOduration=72.066968917 podStartE2EDuration="1m12.066968917s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.048664455 +0000 UTC m=+90.344769758" watchObservedRunningTime="2026-01-24 06:54:49.066968917 +0000 UTC m=+90.363074140" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.079322 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.079296718 podStartE2EDuration="25.079296718s" podCreationTimestamp="2026-01-24 06:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.079228147 +0000 UTC m=+90.375333370" watchObservedRunningTime="2026-01-24 06:54:49.079296718 +0000 UTC m=+90.375401961" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.096210 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.096191887 
podStartE2EDuration="1m12.096191887s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.094094438 +0000 UTC m=+90.390199671" watchObservedRunningTime="2026-01-24 06:54:49.096191887 +0000 UTC m=+90.392297120" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.107050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zbs9f" podStartSLOduration=72.107033514 podStartE2EDuration="1m12.107033514s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.106537142 +0000 UTC m=+90.402642375" watchObservedRunningTime="2026-01-24 06:54:49.107033514 +0000 UTC m=+90.403138737" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.155192 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" podStartSLOduration=71.1551738 podStartE2EDuration="1m11.1551738s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.155113869 +0000 UTC m=+90.451219092" watchObservedRunningTime="2026-01-24 06:54:49.1551738 +0000 UTC m=+90.451279023" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167617 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.248090 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podStartSLOduration=72.248074645 podStartE2EDuration="1m12.248074645s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.22632426 +0000 UTC m=+90.522429503" watchObservedRunningTime="2026-01-24 06:54:49.248074645 +0000 UTC m=+90.544179868" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.248218 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-797q5" podStartSLOduration=72.248213778 podStartE2EDuration="1m12.248213778s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.247690895 +0000 UTC m=+90.543796148" watchObservedRunningTime="2026-01-24 06:54:49.248213778 +0000 UTC m=+90.544319001" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.260558 4675 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-image-registry/node-ca-7rtdz" podStartSLOduration=72.260531139 podStartE2EDuration="1m12.260531139s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.259669438 +0000 UTC m=+90.555774661" watchObservedRunningTime="2026-01-24 06:54:49.260531139 +0000 UTC m=+90.556636402" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269620 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372864 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475091 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475117 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577832 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680614 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783120 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887521 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887531 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942084 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.942948 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942471 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.943075 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942462 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.942688 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.943180 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.946568 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:04:55.713081449 +0000 UTC Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991060 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991804 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094219 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094258 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197924 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.300794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301543 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.404795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405321 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507937 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.610960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611918 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.715839 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716127 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818620 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.946806 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:57:20.304679494 +0000 UTC Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023605 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023618 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023631 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126294 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229638 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332425 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434943 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434974 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.537988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538141 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640644 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743536 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846184 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846194 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941602 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941652 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941639 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941767 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.941881 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.942016 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.942196 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.942352 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.947062 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:21:55.341670605 +0000 UTC Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949813 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949868 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949942 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053487 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156887 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260449 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374459 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.477815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478081 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582151 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.684928 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.684991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.685004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.685021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.685037 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.788007 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.891658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.891966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.892144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.892303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.892437 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.943137 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:54:52 crc kubenswrapper[4675]: E0124 06:54:52.943326 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.947472 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:47:06.194487128 +0000 UTC Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995852 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995893 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.066951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067221 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067326 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067469 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:53Z","lastTransitionTime":"2026-01-24T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.124101 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.124085298 podStartE2EDuration="45.124085298s" podCreationTimestamp="2026-01-24 06:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.284503395 +0000 UTC m=+90.580608618" watchObservedRunningTime="2026-01-24 06:54:53.124085298 +0000 UTC m=+94.420190521" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.124476 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs"] Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.124806 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.128041 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.128450 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.129261 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.133705 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.212995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8305da25-aad5-435c-994d-00c2fc75ed74-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213088 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8305da25-aad5-435c-994d-00c2fc75ed74-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8305da25-aad5-435c-994d-00c2fc75ed74-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213292 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314464 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8305da25-aad5-435c-994d-00c2fc75ed74-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314510 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8305da25-aad5-435c-994d-00c2fc75ed74-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314591 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314659 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8305da25-aad5-435c-994d-00c2fc75ed74-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314883 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.316751 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8305da25-aad5-435c-994d-00c2fc75ed74-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.324635 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8305da25-aad5-435c-994d-00c2fc75ed74-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc 
kubenswrapper[4675]: I0124 06:54:53.335571 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8305da25-aad5-435c-994d-00c2fc75ed74-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.440532 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.643169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" event={"ID":"8305da25-aad5-435c-994d-00c2fc75ed74","Type":"ContainerStarted","Data":"3106dd92396404fd20cf5616dd67a83e0bc41d2bf11bc2b2b98acebc7ba1df99"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.643221 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" event={"ID":"8305da25-aad5-435c-994d-00c2fc75ed74","Type":"ContainerStarted","Data":"f54b4a95013ed9b88107852ffb3256e81ce6e9af7fa8b1c770a8cee29e28c9c1"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942189 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942232 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942196 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942351 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942455 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942541 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942626 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.947830 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:08:00.974415207 +0000 UTC Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.948290 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.954914 4675 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942315 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942450 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942327 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942312 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942528 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942336 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942649 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942745 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:56 crc kubenswrapper[4675]: I0124 06:54:56.041674 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:56 crc kubenswrapper[4675]: E0124 06:54:56.041948 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:56 crc kubenswrapper[4675]: E0124 06:54:56.042059 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:56:00.042030255 +0000 UTC m=+161.338135518 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941632 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941691 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941632 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941824 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.941969 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.942102 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.942215 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.942399 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.942554 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.942694 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.942895 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.942972 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.942999 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.943083 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.943097 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.943341 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.026871 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.027078 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.027194 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.027582 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.942049 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.942162 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.942053 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.942308 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:02 crc kubenswrapper[4675]: I0124 06:55:02.942286 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:02 crc kubenswrapper[4675]: E0124 06:55:02.942534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:02 crc kubenswrapper[4675]: I0124 06:55:02.943000 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:02 crc kubenswrapper[4675]: E0124 06:55:02.943169 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:03 crc kubenswrapper[4675]: I0124 06:55:03.941492 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:03 crc kubenswrapper[4675]: I0124 06:55:03.941638 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:03 crc kubenswrapper[4675]: E0124 06:55:03.941745 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:03 crc kubenswrapper[4675]: E0124 06:55:03.942113 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:03 crc kubenswrapper[4675]: I0124 06:55:03.942460 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:55:03 crc kubenswrapper[4675]: E0124 06:55:03.942655 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:55:04 crc kubenswrapper[4675]: I0124 06:55:04.941431 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:04 crc kubenswrapper[4675]: E0124 06:55:04.941651 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:04 crc kubenswrapper[4675]: I0124 06:55:04.941709 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:04 crc kubenswrapper[4675]: E0124 06:55:04.941934 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:05 crc kubenswrapper[4675]: I0124 06:55:05.942522 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:05 crc kubenswrapper[4675]: I0124 06:55:05.942528 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:05 crc kubenswrapper[4675]: E0124 06:55:05.943715 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:05 crc kubenswrapper[4675]: E0124 06:55:05.943577 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:06 crc kubenswrapper[4675]: I0124 06:55:06.942173 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:06 crc kubenswrapper[4675]: I0124 06:55:06.942286 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:06 crc kubenswrapper[4675]: E0124 06:55:06.942492 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:06 crc kubenswrapper[4675]: E0124 06:55:06.942770 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:07 crc kubenswrapper[4675]: I0124 06:55:07.942381 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:07 crc kubenswrapper[4675]: I0124 06:55:07.942558 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:07 crc kubenswrapper[4675]: E0124 06:55:07.942805 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:07 crc kubenswrapper[4675]: E0124 06:55:07.942867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:08 crc kubenswrapper[4675]: I0124 06:55:08.942517 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:08 crc kubenswrapper[4675]: I0124 06:55:08.942592 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:08 crc kubenswrapper[4675]: E0124 06:55:08.944716 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:08 crc kubenswrapper[4675]: E0124 06:55:08.944891 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:09 crc kubenswrapper[4675]: I0124 06:55:09.941925 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:09 crc kubenswrapper[4675]: I0124 06:55:09.941984 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:09 crc kubenswrapper[4675]: E0124 06:55:09.942090 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:09 crc kubenswrapper[4675]: E0124 06:55:09.942289 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:10 crc kubenswrapper[4675]: I0124 06:55:10.941878 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:10 crc kubenswrapper[4675]: I0124 06:55:10.941908 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:10 crc kubenswrapper[4675]: E0124 06:55:10.943084 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:10 crc kubenswrapper[4675]: E0124 06:55:10.943558 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.701645 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702178 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/0.log" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702225 4675 generic.go:334] "Generic (PLEG): container finished" podID="61e129ca-c9dc-4375-b373-5eec702744bd" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" exitCode=1 Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" 
event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerDied","Data":"6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe"} Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702296 4675 scope.go:117] "RemoveContainer" containerID="6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.703010 4675 scope.go:117] "RemoveContainer" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" Jan 24 06:55:11 crc kubenswrapper[4675]: E0124 06:55:11.703209 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zx9ns_openshift-multus(61e129ca-c9dc-4375-b373-5eec702744bd)\"" pod="openshift-multus/multus-zx9ns" podUID="61e129ca-c9dc-4375-b373-5eec702744bd" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.728292 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" podStartSLOduration=94.728270948 podStartE2EDuration="1m34.728270948s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:53.657264279 +0000 UTC m=+94.953369502" watchObservedRunningTime="2026-01-24 06:55:11.728270948 +0000 UTC m=+113.024376181" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.941925 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.941965 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:11 crc kubenswrapper[4675]: E0124 06:55:11.942069 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:11 crc kubenswrapper[4675]: E0124 06:55:11.942201 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:12 crc kubenswrapper[4675]: I0124 06:55:12.707092 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 06:55:12 crc kubenswrapper[4675]: I0124 06:55:12.941973 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:12 crc kubenswrapper[4675]: I0124 06:55:12.942143 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:12 crc kubenswrapper[4675]: E0124 06:55:12.942592 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:12 crc kubenswrapper[4675]: E0124 06:55:12.942690 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:13 crc kubenswrapper[4675]: I0124 06:55:13.941429 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:13 crc kubenswrapper[4675]: I0124 06:55:13.941429 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:13 crc kubenswrapper[4675]: E0124 06:55:13.941951 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:13 crc kubenswrapper[4675]: E0124 06:55:13.941853 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:14 crc kubenswrapper[4675]: I0124 06:55:14.942261 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:14 crc kubenswrapper[4675]: E0124 06:55:14.942504 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:14 crc kubenswrapper[4675]: I0124 06:55:14.942639 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:14 crc kubenswrapper[4675]: E0124 06:55:14.942783 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:14 crc kubenswrapper[4675]: I0124 06:55:14.943862 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:55:14 crc kubenswrapper[4675]: E0124 06:55:14.944033 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:55:15 crc kubenswrapper[4675]: I0124 06:55:15.942169 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:15 crc kubenswrapper[4675]: I0124 06:55:15.942208 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:15 crc kubenswrapper[4675]: E0124 06:55:15.942368 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:15 crc kubenswrapper[4675]: E0124 06:55:15.942456 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:16 crc kubenswrapper[4675]: I0124 06:55:16.942043 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:16 crc kubenswrapper[4675]: I0124 06:55:16.942062 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:16 crc kubenswrapper[4675]: E0124 06:55:16.942254 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:16 crc kubenswrapper[4675]: E0124 06:55:16.942350 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:17 crc kubenswrapper[4675]: I0124 06:55:17.941986 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:17 crc kubenswrapper[4675]: I0124 06:55:17.941986 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:17 crc kubenswrapper[4675]: E0124 06:55:17.942227 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:17 crc kubenswrapper[4675]: E0124 06:55:17.942392 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:18 crc kubenswrapper[4675]: I0124 06:55:18.942963 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:18 crc kubenswrapper[4675]: I0124 06:55:18.943147 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:18 crc kubenswrapper[4675]: E0124 06:55:18.943145 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:18 crc kubenswrapper[4675]: E0124 06:55:18.943240 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:18 crc kubenswrapper[4675]: E0124 06:55:18.982545 4675 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 24 06:55:19 crc kubenswrapper[4675]: E0124 06:55:19.029224 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 06:55:19 crc kubenswrapper[4675]: I0124 06:55:19.942163 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:19 crc kubenswrapper[4675]: E0124 06:55:19.942374 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:19 crc kubenswrapper[4675]: I0124 06:55:19.942172 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:19 crc kubenswrapper[4675]: E0124 06:55:19.942843 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:20 crc kubenswrapper[4675]: I0124 06:55:20.942119 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:20 crc kubenswrapper[4675]: I0124 06:55:20.942153 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:20 crc kubenswrapper[4675]: E0124 06:55:20.942279 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:20 crc kubenswrapper[4675]: E0124 06:55:20.942394 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:21 crc kubenswrapper[4675]: I0124 06:55:21.942248 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:21 crc kubenswrapper[4675]: I0124 06:55:21.942275 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:21 crc kubenswrapper[4675]: E0124 06:55:21.942493 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:21 crc kubenswrapper[4675]: E0124 06:55:21.942618 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:22 crc kubenswrapper[4675]: I0124 06:55:22.942327 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:22 crc kubenswrapper[4675]: E0124 06:55:22.942581 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:22 crc kubenswrapper[4675]: I0124 06:55:22.943159 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:22 crc kubenswrapper[4675]: E0124 06:55:22.943311 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:23 crc kubenswrapper[4675]: I0124 06:55:23.941857 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:23 crc kubenswrapper[4675]: E0124 06:55:23.942022 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:23 crc kubenswrapper[4675]: I0124 06:55:23.941861 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:23 crc kubenswrapper[4675]: E0124 06:55:23.942233 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:24 crc kubenswrapper[4675]: E0124 06:55:24.031116 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 06:55:24 crc kubenswrapper[4675]: I0124 06:55:24.941435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:24 crc kubenswrapper[4675]: I0124 06:55:24.941524 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:24 crc kubenswrapper[4675]: E0124 06:55:24.941565 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:24 crc kubenswrapper[4675]: E0124 06:55:24.941671 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:25 crc kubenswrapper[4675]: I0124 06:55:25.942568 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:25 crc kubenswrapper[4675]: I0124 06:55:25.942605 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:25 crc kubenswrapper[4675]: E0124 06:55:25.942748 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:25 crc kubenswrapper[4675]: E0124 06:55:25.942955 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:25 crc kubenswrapper[4675]: I0124 06:55:25.943927 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.760699 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.763273 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.764262 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.772619 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8mdgj"] Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.772758 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:26 crc kubenswrapper[4675]: E0124 06:55:26.772864 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.800973 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podStartSLOduration=109.800956345 podStartE2EDuration="1m49.800956345s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:26.800781431 +0000 UTC m=+128.096886654" watchObservedRunningTime="2026-01-24 06:55:26.800956345 +0000 UTC m=+128.097061558" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.944438 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:26 crc kubenswrapper[4675]: E0124 06:55:26.944564 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.945363 4675 scope.go:117] "RemoveContainer" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.768615 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.768696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b"} Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.942027 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.942135 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:27 crc kubenswrapper[4675]: E0124 06:55:27.942271 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:27 crc kubenswrapper[4675]: E0124 06:55:27.942374 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:28 crc kubenswrapper[4675]: I0124 06:55:28.942071 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:28 crc kubenswrapper[4675]: I0124 06:55:28.942090 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:28 crc kubenswrapper[4675]: E0124 06:55:28.943244 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:28 crc kubenswrapper[4675]: E0124 06:55:28.943493 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:29 crc kubenswrapper[4675]: E0124 06:55:29.032866 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 06:55:29 crc kubenswrapper[4675]: I0124 06:55:29.942295 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:29 crc kubenswrapper[4675]: I0124 06:55:29.942355 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:29 crc kubenswrapper[4675]: E0124 06:55:29.942450 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:29 crc kubenswrapper[4675]: E0124 06:55:29.942537 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:30 crc kubenswrapper[4675]: I0124 06:55:30.942501 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:30 crc kubenswrapper[4675]: I0124 06:55:30.942662 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:30 crc kubenswrapper[4675]: E0124 06:55:30.942782 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:30 crc kubenswrapper[4675]: E0124 06:55:30.942855 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:31 crc kubenswrapper[4675]: I0124 06:55:31.942331 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:31 crc kubenswrapper[4675]: I0124 06:55:31.942340 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:31 crc kubenswrapper[4675]: E0124 06:55:31.942534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:31 crc kubenswrapper[4675]: E0124 06:55:31.942649 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:32 crc kubenswrapper[4675]: I0124 06:55:32.941673 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:32 crc kubenswrapper[4675]: I0124 06:55:32.941697 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:32 crc kubenswrapper[4675]: E0124 06:55:32.941975 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:32 crc kubenswrapper[4675]: E0124 06:55:32.942141 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:33 crc kubenswrapper[4675]: I0124 06:55:33.942186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:33 crc kubenswrapper[4675]: I0124 06:55:33.942274 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:33 crc kubenswrapper[4675]: E0124 06:55:33.942468 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:33 crc kubenswrapper[4675]: E0124 06:55:33.942655 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.486620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.538236 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.538815 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.539594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.541597 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.547342 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.551861 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.551865 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.552615 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.552631 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.554134 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.555319 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kz26"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.555755 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.556763 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.557356 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.558631 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.559025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.560146 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.560707 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561064 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561129 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561502 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561649 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-s7phr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561761 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561792 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.562512 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.562660 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.571397 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.572047 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580296 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580589 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580739 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580883 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.583703 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xwk6j"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.584193 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.589280 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.590492 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.592596 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593343 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593367 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593571 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593593 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593702 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593766 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593832 4675 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593871 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593893 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593929 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593700 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594025 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594056 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594072 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594126 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594168 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594192 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594030 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597447 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597554 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597688 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597769 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597828 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597914 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.598238 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.598338 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.599114 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.599125 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.602933 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.605705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c64jl"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.606085 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.606218 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.608747 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.609370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.609595 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ntpw9"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.610063 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.611028 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m9tnc"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.611371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.616483 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.636871 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637204 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637449 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637670 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637849 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.638061 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.640101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.640398 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.641360 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.643075 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.643219 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.640399 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.644867 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.645367 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.672706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.672830 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.672995 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673055 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673597 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673689 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673770 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673868 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673929 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673990 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.675976 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676404 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676792 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676974 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.678204 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.679311 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.677057 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.677568 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.678406 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.680438 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jntdn"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.681088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jntdn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.681419 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-577lm"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.682108 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.683937 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.684438 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.688157 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.688225 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.690025 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.691046 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.691674 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.691827 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.692144 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.692163 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693499 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693653 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693686 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693798 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693945 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693959 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694054 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694143 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694327 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694442 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694514 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694596 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694813 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.695668 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.695850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.697163 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.702698 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kz26"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.704403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.709028 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.709465 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.711141 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.716873 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.717377 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.739246 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.740435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fa7730-1346-4cd6-bb9a-b93bb377047d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58kz\" (UniqueName: \"kubernetes.io/projected/41fa7730-1346-4cd6-bb9a-b93bb377047d-kube-api-access-l58kz\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-serving-cert\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742938 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742960 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742989 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743030 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-client\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-default-certificate\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743141 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4p2m\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-kube-api-access-p4p2m\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743165 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdj4\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-kube-api-access-fsdj4\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743224 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743253 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-config\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743336 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743365 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c558c558-e8e3-4914-b88e-f5299916978f-serving-cert\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-service-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743417 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743477 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-node-pullsecrets\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743517 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a446c38f-dc5a-4a87-ba82-3405c0aadae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743546 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-image-import-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743667 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-trusted-ca\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743692 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a446c38f-dc5a-4a87-ba82-3405c0aadae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743769 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-client\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743807 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743831 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snnx\" (UniqueName: \"kubernetes.io/projected/737c0ee8-629a-4935-8357-c321e1ff5a41-kube-api-access-9snnx\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/927fb92f-2e72-4344-90e4-cfc0357135f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743970 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744058 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744084 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-encryption-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744117 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-stats-auth\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744232 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/737c0ee8-629a-4935-8357-c321e1ff5a41-service-ca-bundle\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744297 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744319 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldkd\" (UniqueName: 
\"kubernetes.io/projected/927fb92f-2e72-4344-90e4-cfc0357135f4-kube-api-access-sldkd\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744347 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64c2d\" (UniqueName: \"kubernetes.io/projected/f25231da-465a-433f-9b0b-e37a23bc59b8-kube-api-access-64c2d\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744377 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744395 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744419 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.744438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3da81853-6557-4653-87f8-c2423aeb3994-machine-approver-tls\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744455 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744477 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxg8z\" (UniqueName: \"kubernetes.io/projected/4b956c8c-f12f-4622-b67d-29349ba463aa-kube-api-access-cxg8z\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744493 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744510 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55gd7\" (UniqueName: \"kubernetes.io/projected/3da81853-6557-4653-87f8-c2423aeb3994-kube-api-access-55gd7\") pod 
\"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-metrics-certs\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744540 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-audit\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc55\" (UniqueName: \"kubernetes.io/projected/c558c558-e8e3-4914-b88e-f5299916978f-kube-api-access-qcc55\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-metrics-tls\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-p28rq\" (UniqueName: \"kubernetes.io/projected/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-kube-api-access-p28rq\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744628 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fa7730-1346-4cd6-bb9a-b93bb377047d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744660 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-serving-cert\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744818 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-config\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " 
pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744872 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-auth-proxy-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744909 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-audit-dir\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.747999 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.748200 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.749102 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.753957 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.754713 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.755059 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.760783 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.761534 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.762260 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.762846 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.763361 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.772710 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.773354 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.776825 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.777497 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f895q"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.777955 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.778145 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.778207 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.783731 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.786436 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.786619 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.787419 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.787781 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.787995 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.789594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.791399 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.791838 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792345 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792831 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792925 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792952 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792966 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.800927 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.801109 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.801624 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.802111 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s7phr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805263 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805274 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805786 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.807917 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.808382 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-577lm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.809349 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.811126 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ntpw9"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.813388 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.813514 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.819077 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.820801 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.821700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.823168 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.823376 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.823960 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m9tnc"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.824979 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.825990 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.827326 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.830136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jntdn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.831510 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.832617 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9zjhs"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.833370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.833952 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.835959 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.840688 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.843500 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.844531 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.849074 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853391 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-images\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc3478-ba63-46d9-a78d-728fd442a3a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pghrz\" (UniqueName: \"kubernetes.io/projected/bba258ca-d05a-417e-8a91-73e603062c20-kube-api-access-pghrz\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853964 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b49v\" (UniqueName: \"kubernetes.io/projected/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-kube-api-access-7b49v\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853991 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7266e9dc-a776-4331-a8d5-324deae3a589-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854030 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/737c0ee8-629a-4935-8357-c321e1ff5a41-service-ca-bundle\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854122 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854169 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfb9d06-165a-4595-9422-d6b22e311ec2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854238 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64c2d\" (UniqueName: \"kubernetes.io/projected/f25231da-465a-433f-9b0b-e37a23bc59b8-kube-api-access-64c2d\") pod 
\"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854272 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlqvq\" (UniqueName: \"kubernetes.io/projected/46902882-1cf1-4d7d-aa61-4502520d171f-kube-api-access-hlqvq\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854327 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb44d\" (UniqueName: \"kubernetes.io/projected/1ffc3478-ba63-46d9-a78d-728fd442a3a2-kube-api-access-gb44d\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854359 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 
crc kubenswrapper[4675]: I0124 06:55:34.854415 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854441 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxg8z\" (UniqueName: \"kubernetes.io/projected/4b956c8c-f12f-4622-b67d-29349ba463aa-kube-api-access-cxg8z\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55gd7\" (UniqueName: \"kubernetes.io/projected/3da81853-6557-4653-87f8-c2423aeb3994-kube-api-access-55gd7\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28rq\" (UniqueName: \"kubernetes.io/projected/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-kube-api-access-p28rq\") pod 
\"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854854 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcc55\" (UniqueName: \"kubernetes.io/projected/c558c558-e8e3-4914-b88e-f5299916978f-kube-api-access-qcc55\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-images\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-serving-cert\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855049 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b664684-ced7-4027-8050-6da6e83d0fd7-serving-cert\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855128 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-audit-dir\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855805 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/737c0ee8-629a-4935-8357-c321e1ff5a41-service-ca-bundle\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.856457 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f895q"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.856492 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.858431 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.858675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-audit-dir\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " 
pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855341 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58kz\" (UniqueName: \"kubernetes.io/projected/41fa7730-1346-4cd6-bb9a-b93bb377047d-kube-api-access-l58kz\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859335 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859381 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-encryption-config\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859458 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fa7730-1346-4cd6-bb9a-b93bb377047d-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.860950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrwb\" (UniqueName: \"kubernetes.io/projected/4bfd659c-336a-4497-bb5b-eaf18b1118e3-kube-api-access-fnrwb\") pod \"downloads-7954f5f757-jntdn\" (UID: \"4bfd659c-336a-4497-bb5b-eaf18b1118e3\") " pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.860997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.861039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7gw\" (UniqueName: \"kubernetes.io/projected/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-kube-api-access-4w7gw\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.861126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-default-certificate\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.861830 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fa7730-1346-4cd6-bb9a-b93bb377047d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862326 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4p2m\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-kube-api-access-p4p2m\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862590 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862785 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdj4\" (UniqueName: 
\"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-kube-api-access-fsdj4\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862888 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862968 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863515 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863635 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863670 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-config\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863703 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863872 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865304 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7266e9dc-a776-4331-a8d5-324deae3a589-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: 
\"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c558c558-e8e3-4914-b88e-f5299916978f-serving-cert\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865488 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a88693-6b36-417b-829e-50981ccff9f7-config\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.867253 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.867813 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865922 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-service-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868413 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-key\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-node-pullsecrets\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868539 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-serving-cert\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868563 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79a88693-6b36-417b-829e-50981ccff9f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a446c38f-dc5a-4a87-ba82-3405c0aadae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868786 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868806 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868858 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/927fb92f-2e72-4344-90e4-cfc0357135f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868895 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba258ca-d05a-417e-8a91-73e603062c20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868920 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ffc3478-ba63-46d9-a78d-728fd442a3a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868949 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-etcd-client\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869007 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbfb9d06-165a-4595-9422-d6b22e311ec2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869044 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7266e9dc-a776-4331-a8d5-324deae3a589-config\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869068 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-encryption-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-stats-auth\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869112 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-config\") pod 
\"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869155 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-proxy-tls\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869207 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldkd\" (UniqueName: \"kubernetes.io/projected/927fb92f-2e72-4344-90e4-cfc0357135f4-kube-api-access-sldkd\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.869342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46902882-1cf1-4d7d-aa61-4502520d171f-proxy-tls\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869417 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3da81853-6557-4653-87f8-c2423aeb3994-machine-approver-tls\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"console-f9d7485db-c64jl\" (UID: 
\"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-audit\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-metrics-certs\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-metrics-tls\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869533 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fa7730-1346-4cd6-bb9a-b93bb377047d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869549 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxn7\" (UniqueName: \"kubernetes.io/projected/0b664684-ced7-4027-8050-6da6e83d0fd7-kube-api-access-gpxn7\") pod 
\"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46902882-1cf1-4d7d-aa61-4502520d171f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-config\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869631 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-serving-cert\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-auth-proxy-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869738 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-serving-cert\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869760 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869774 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-config\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869812 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869850 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 
06:55:34.869874 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-client\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869898 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79a88693-6b36-417b-829e-50981ccff9f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8817d706-baea-4924-868d-c656652d9111-audit-dir\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-audit-policies\") pod 
\"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vngs\" (UniqueName: \"kubernetes.io/projected/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-kube-api-access-2vngs\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870005 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfb9d06-165a-4595-9422-d6b22e311ec2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870077 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a446c38f-dc5a-4a87-ba82-3405c0aadae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-image-import-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870116 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870134 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-cabundle\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870169 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870188 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.870206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-trusted-ca\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870227 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-client\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870242 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870262 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870320 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snnx\" (UniqueName: \"kubernetes.io/projected/737c0ee8-629a-4935-8357-c321e1ff5a41-kube-api-access-9snnx\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " 
pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870383 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcnb\" (UniqueName: \"kubernetes.io/projected/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-kube-api-access-zgcnb\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870411 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870454 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ptc9p\" (UniqueName: \"kubernetes.io/projected/8817d706-baea-4924-868d-c656652d9111-kube-api-access-ptc9p\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870520 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.871175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-node-pullsecrets\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.871232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.872100 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-default-certificate\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.873210 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a446c38f-dc5a-4a87-ba82-3405c0aadae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874562 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874597 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-config\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.875266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-service-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.878126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.878344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/927fb92f-2e72-4344-90e4-cfc0357135f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.878531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-serving-cert\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.888806 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-audit\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889283 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " 
pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889699 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-encryption-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889862 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891031 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891091 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-serving-cert\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891350 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fa7730-1346-4cd6-bb9a-b93bb377047d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-metrics-certs\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891689 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892024 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-metrics-tls\") pod \"dns-operator-744455d44c-6kz26\" (UID: 
\"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892210 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892276 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c558c558-e8e3-4914-b88e-f5299916978f-serving-cert\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.893825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-image-import-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.894113 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod 
\"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.894436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-stats-auth\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.894998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.895053 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.895148 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.895405 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-auth-proxy-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 
crc kubenswrapper[4675]: I0124 06:55:34.896067 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a446c38f-dc5a-4a87-ba82-3405c0aadae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.896662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.897546 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-client\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.898702 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-trusted-ca\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.898956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.899297 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.899514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.899837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.899944 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.900377 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.900543 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.900810 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3da81853-6557-4653-87f8-c2423aeb3994-machine-approver-tls\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901811 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901915 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-config\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.902461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-client\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.903037 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m8cvs"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.903478 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.903783 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.905321 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.905691 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.906893 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.907900 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pn69w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.908786 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.908970 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m8cvs"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.910125 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pn69w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.911132 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-24zjn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.911508 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.912614 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-24zjn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.920776 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.941421 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.941587 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.941633 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.961293 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971351 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-encryption-config\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971416 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnrwb\" (UniqueName: \"kubernetes.io/projected/4bfd659c-336a-4497-bb5b-eaf18b1118e3-kube-api-access-fnrwb\") pod \"downloads-7954f5f757-jntdn\" (UID: \"4bfd659c-336a-4497-bb5b-eaf18b1118e3\") " 
pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971452 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7gw\" (UniqueName: \"kubernetes.io/projected/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-kube-api-access-4w7gw\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971576 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971623 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7266e9dc-a776-4331-a8d5-324deae3a589-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971655 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a88693-6b36-417b-829e-50981ccff9f7-config\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971733 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-key\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971759 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-serving-cert\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971779 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971822 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79a88693-6b36-417b-829e-50981ccff9f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971869 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba258ca-d05a-417e-8a91-73e603062c20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971893 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffc3478-ba63-46d9-a78d-728fd442a3a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbfb9d06-165a-4595-9422-d6b22e311ec2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971958 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-etcd-client\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971981 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972004 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7266e9dc-a776-4331-a8d5-324deae3a589-config\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-config\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-proxy-tls\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46902882-1cf1-4d7d-aa61-4502520d171f-proxy-tls\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972113 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/46902882-1cf1-4d7d-aa61-4502520d171f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972159 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpxn7\" (UniqueName: \"kubernetes.io/projected/0b664684-ced7-4027-8050-6da6e83d0fd7-kube-api-access-gpxn7\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972181 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-serving-cert\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-config\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972287 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79a88693-6b36-417b-829e-50981ccff9f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972308 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8817d706-baea-4924-868d-c656652d9111-audit-dir\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-audit-policies\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972337 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vngs\" (UniqueName: \"kubernetes.io/projected/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-kube-api-access-2vngs\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972417 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfb9d06-165a-4595-9422-d6b22e311ec2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972439 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972454 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-cabundle\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgcnb\" (UniqueName: \"kubernetes.io/projected/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-kube-api-access-zgcnb\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptc9p\" (UniqueName: \"kubernetes.io/projected/8817d706-baea-4924-868d-c656652d9111-kube-api-access-ptc9p\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972553 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-images\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pghrz\" (UniqueName: \"kubernetes.io/projected/bba258ca-d05a-417e-8a91-73e603062c20-kube-api-access-pghrz\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972603 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b49v\" (UniqueName: \"kubernetes.io/projected/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-kube-api-access-7b49v\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972618 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7266e9dc-a776-4331-a8d5-324deae3a589-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972671 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc3478-ba63-46d9-a78d-728fd442a3a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/cbfb9d06-165a-4595-9422-d6b22e311ec2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972754 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlqvq\" (UniqueName: \"kubernetes.io/projected/46902882-1cf1-4d7d-aa61-4502520d171f-kube-api-access-hlqvq\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972794 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb44d\" (UniqueName: \"kubernetes.io/projected/1ffc3478-ba63-46d9-a78d-728fd442a3a2-kube-api-access-gb44d\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"route-controller-manager-6576b87f9c-xcztf\" 
(UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-images\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972933 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b664684-ced7-4027-8050-6da6e83d0fd7-serving-cert\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974338 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-serving-cert\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974407 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974447 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-images\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974899 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8817d706-baea-4924-868d-c656652d9111-audit-dir\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975156 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-config\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975159 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975310 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-config\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975362 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc3478-ba63-46d9-a78d-728fd442a3a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.976177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-audit-policies\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.976627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba258ca-d05a-417e-8a91-73e603062c20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.976841 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffc3478-ba63-46d9-a78d-728fd442a3a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.977069 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46902882-1cf1-4d7d-aa61-4502520d171f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.977487 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-serving-cert\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.981464 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-encryption-config\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.983105 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-etcd-client\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.989367 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.994130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.000667 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.002321 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.021149 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.040934 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.045889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b664684-ced7-4027-8050-6da6e83d0fd7-serving-cert\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.060465 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.102018 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.120818 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.127184 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79a88693-6b36-417b-829e-50981ccff9f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.140382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.161196 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.163430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a88693-6b36-417b-829e-50981ccff9f7-config\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.180290 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.200618 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.220920 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.227753 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfb9d06-165a-4595-9422-d6b22e311ec2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.240854 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.260698 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.264860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfb9d06-165a-4595-9422-d6b22e311ec2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.282308 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.301047 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.320378 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.341353 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.361955 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.380445 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.389356 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.401668 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.406662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.421461 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.425964 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.446175 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.461525 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.481561 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.487292 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7266e9dc-a776-4331-a8d5-324deae3a589-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.501755 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.507041 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7266e9dc-a776-4331-a8d5-324deae3a589-config\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.521597 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.541316 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.561689 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.582521 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.602045 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.609508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.621439 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.625452 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.641623 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.661994 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.681495 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.691440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46902882-1cf1-4d7d-aa61-4502520d171f-proxy-tls\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.701967 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.720642 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.726489 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-images\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.740983 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.761674 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.766417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-key\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.779116 4675 request.go:700] Waited for 1.000567456s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.781696 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.801677 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.804246 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-cabundle\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.820912 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.840789 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.847785 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-proxy-tls\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.881903 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.901942 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.921452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.940616 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.941577 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.941667 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.960938 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.981154 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.000785 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.021510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.041356 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.060661 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.080787 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.101759 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.122385 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.141388 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.161610 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.181956 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.201144 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.221972 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.242011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.261438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.284779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.301008 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.331773 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.340942 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.361114 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.381270 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.401086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.420797 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.461338 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxg8z\" (UniqueName: \"kubernetes.io/projected/4b956c8c-f12f-4622-b67d-29349ba463aa-kube-api-access-cxg8z\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.490601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55gd7\" (UniqueName: \"kubernetes.io/projected/3da81853-6557-4653-87f8-c2423aeb3994-kube-api-access-55gd7\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.501124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64c2d\" (UniqueName: \"kubernetes.io/projected/f25231da-465a-433f-9b0b-e37a23bc59b8-kube-api-access-64c2d\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.508055 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.519709 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28rq\" (UniqueName: \"kubernetes.io/projected/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-kube-api-access-p28rq\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.534360 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcc55\" (UniqueName: \"kubernetes.io/projected/c558c558-e8e3-4914-b88e-f5299916978f-kube-api-access-qcc55\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.554803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58kz\" (UniqueName: \"kubernetes.io/projected/41fa7730-1346-4cd6-bb9a-b93bb377047d-kube-api-access-l58kz\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.576943 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.584929 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4p2m\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-kube-api-access-p4p2m\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.611561 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdj4\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-kube-api-access-fsdj4\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.666201 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.671204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.675440 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.684260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.688796 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldkd\" (UniqueName: \"kubernetes.io/projected/927fb92f-2e72-4344-90e4-cfc0357135f4-kube-api-access-sldkd\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.690892 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.697339 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.699435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.717196 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.728006 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.738378 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snnx\" (UniqueName: \"kubernetes.io/projected/737c0ee8-629a-4935-8357-c321e1ff5a41-kube-api-access-9snnx\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.742906 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.746091 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.764693 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.779474 4675 request.go:700] Waited for 1.875246936s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.782774 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.787271 4675 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.800587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.821629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" event={"ID":"3da81853-6557-4653-87f8-c2423aeb3994","Type":"ContainerStarted","Data":"0a2ca7386eb6561b8002b4357ae9c07c7c6349ff30e417ea7491f45e6bde772f"} Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.821889 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.841214 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.845581 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.860654 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.885819 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.901637 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.908191 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.913795 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.920778 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.926917 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.957437 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.964002 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.993936 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:36.998096 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.014050 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.041567 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.051408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnrwb\" (UniqueName: \"kubernetes.io/projected/4bfd659c-336a-4497-bb5b-eaf18b1118e3-kube-api-access-fnrwb\") pod \"downloads-7954f5f757-jntdn\" (UID: \"4bfd659c-336a-4497-bb5b-eaf18b1118e3\") " pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.073549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7gw\" (UniqueName: \"kubernetes.io/projected/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-kube-api-access-4w7gw\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.074704 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s7phr"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.101837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7266e9dc-a776-4331-a8d5-324deae3a589-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: 
\"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.102399 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m9tnc"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.108648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vngs\" (UniqueName: \"kubernetes.io/projected/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-kube-api-access-2vngs\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.110415 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.119448 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b49v\" (UniqueName: \"kubernetes.io/projected/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-kube-api-access-7b49v\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:37 crc kubenswrapper[4675]: W0124 06:55:37.119587 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b956c8c_f12f_4622_b67d_29349ba463aa.slice/crio-34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8 WatchSource:0}: Error finding container 34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8: Status 404 returned error can't find the container with id 34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8 Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.136326 4675 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.144636 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcnb\" (UniqueName: \"kubernetes.io/projected/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-kube-api-access-zgcnb\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.147447 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kz26"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.148536 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.155506 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.161340 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptc9p\" (UniqueName: \"kubernetes.io/projected/8817d706-baea-4924-868d-c656652d9111-kube-api-access-ptc9p\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.197822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pghrz\" (UniqueName: \"kubernetes.io/projected/bba258ca-d05a-417e-8a91-73e603062c20-kube-api-access-pghrz\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.212139 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbfb9d06-165a-4595-9422-d6b22e311ec2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.220320 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ntpw9"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.221578 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.225176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdk9l\" (UniqueName: 
\"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.235980 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlqvq\" (UniqueName: \"kubernetes.io/projected/46902882-1cf1-4d7d-aa61-4502520d171f-kube-api-access-hlqvq\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.260328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb44d\" (UniqueName: \"kubernetes.io/projected/1ffc3478-ba63-46d9-a78d-728fd442a3a2-kube-api-access-gb44d\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.291492 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpxn7\" (UniqueName: \"kubernetes.io/projected/0b664684-ced7-4027-8050-6da6e83d0fd7-kube-api-access-gpxn7\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.300013 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79a88693-6b36-417b-829e-50981ccff9f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.317582 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.336624 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.341055 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.361704 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.440118 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.443688 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.623472 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.632743 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"] Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.690832 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.690965 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.691068 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.695176 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.695955 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.696281 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.696696 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.697073 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.697485 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.700062 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.700821 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.701924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.701960 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702020 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702056 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702083 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702165 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.702633 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.202620553 +0000 UTC m=+139.498725776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.806007 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.806556 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.306528799 +0000 UTC m=+139.602634022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qt7k\" (UniqueName: \"kubernetes.io/projected/b9d48866-3fcd-4d12-83a2-2aee6060d4c4-kube-api-access-2qt7k\") pod \"migrator-59844c95c7-8hf2l\" (UID: \"b9d48866-3fcd-4d12-83a2-2aee6060d4c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807741 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807770 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-srv-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm5zq\" (UniqueName: \"kubernetes.io/projected/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-kube-api-access-gm5zq\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc 
kubenswrapper[4675]: I0124 06:55:37.807862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-serving-cert\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807928 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e08de50b-8092-4f29-b2a8-a391b4778142-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlxqt\" (UniqueName: \"kubernetes.io/projected/77311272-8b70-4772-8e4d-9a5f7d94f104-kube-api-access-qlxqt\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808043 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181f90fa-40e7-4179-8866-6756a0cded18-config-volume\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z56t\" (UniqueName: \"kubernetes.io/projected/cb716cde-084c-490b-a28f-f35c40c0adbb-kube-api-access-5z56t\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808087 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmnl\" (UniqueName: \"kubernetes.io/projected/6c264931-ec70-45fd-a7a3-979e2203eaf8-kube-api-access-wxmnl\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-socket-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-profile-collector-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809570 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wmrp\" (UniqueName: \"kubernetes.io/projected/04bf44e3-ad73-4db3-bf58-f4697644bef7-kube-api-access-5wmrp\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809588 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/181f90fa-40e7-4179-8866-6756a0cded18-metrics-tls\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809605 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-srv-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7vs\" (UniqueName: \"kubernetes.io/projected/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-kube-api-access-lg7vs\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809668 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-certs\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809695 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-cert\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809755 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p66\" (UniqueName: \"kubernetes.io/projected/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-kube-api-access-l4p66\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809906 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809926 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-plugins-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809954 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-node-bootstrap-token\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809986 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.810013 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmqr\" (UniqueName: \"kubernetes.io/projected/e08de50b-8092-4f29-b2a8-a391b4778142-kube-api-access-llmqr\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.810031 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"image-registry-697d97f7c8-qkls6\" 
(UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.810067 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-registration-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.813676 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgc22\" (UniqueName: \"kubernetes.io/projected/181f90fa-40e7-4179-8866-6756a0cded18-kube-api-access-cgc22\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.813712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cb716cde-084c-490b-a28f-f35c40c0adbb-tmpfs\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.813689 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814081 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-webhook-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814164 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jglzh\" (UniqueName: \"kubernetes.io/projected/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-kube-api-access-jglzh\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814280 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-mountpoint-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxnm\" (UniqueName: \"kubernetes.io/projected/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-kube-api-access-ksxnm\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.815458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77311272-8b70-4772-8e4d-9a5f7d94f104-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.816746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.818071 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821607 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821687 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-csi-data-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821797 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: 
I0124 06:55:37.821907 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.822002 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-config\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.823320 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.323304198 +0000 UTC m=+139.619409421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.823734 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.855648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.872785 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerStarted","Data":"1baee155ca04c86836e94a8a309af90387ef167a0b3873a1f4bc0c4361aabb7d"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.876325 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerStarted","Data":"9f62761dfa0e23278a88b4c9d7acb6c23e771672906712e8cf7b32e35ec90e90"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.877926 
4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.896330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerStarted","Data":"2617a8d5990bca5860fe83af255dca72d1f078c4ac17075407e8e2d08aa3e5d0"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.899508 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" event={"ID":"41fa7730-1346-4cd6-bb9a-b93bb377047d","Type":"ContainerStarted","Data":"91f73bf7a6f5da9df7634d2bb39dc0bbffdf7270c481fccf40d8800da117b277"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.899541 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" event={"ID":"41fa7730-1346-4cd6-bb9a-b93bb377047d","Type":"ContainerStarted","Data":"77cc24fd875d6f548090a4f4aaeb9c6d197237aa968f23fd64f7288d7c5a628d"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.911170 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerStarted","Data":"34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.917638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" 
event={"ID":"927fb92f-2e72-4344-90e4-cfc0357135f4","Type":"ContainerStarted","Data":"7cf4b45890be58180d5e97e3e7c1b66bcdbdabd51ec9db6736553c04ac14342e"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.923226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924072 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-config\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924116 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924146 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qt7k\" (UniqueName: \"kubernetes.io/projected/b9d48866-3fcd-4d12-83a2-2aee6060d4c4-kube-api-access-2qt7k\") pod \"migrator-59844c95c7-8hf2l\" (UID: \"b9d48866-3fcd-4d12-83a2-2aee6060d4c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924195 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924221 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-srv-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm5zq\" (UniqueName: \"kubernetes.io/projected/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-kube-api-access-gm5zq\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924284 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-serving-cert\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924308 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924333 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e08de50b-8092-4f29-b2a8-a391b4778142-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924363 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlxqt\" (UniqueName: \"kubernetes.io/projected/77311272-8b70-4772-8e4d-9a5f7d94f104-kube-api-access-qlxqt\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924400 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181f90fa-40e7-4179-8866-6756a0cded18-config-volume\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924422 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z56t\" (UniqueName: \"kubernetes.io/projected/cb716cde-084c-490b-a28f-f35c40c0adbb-kube-api-access-5z56t\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: 
\"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924446 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmnl\" (UniqueName: \"kubernetes.io/projected/6c264931-ec70-45fd-a7a3-979e2203eaf8-kube-api-access-wxmnl\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.925578 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.42546259 +0000 UTC m=+139.721567873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.929119 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-config\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938582 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-socket-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-profile-collector-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938662 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wmrp\" (UniqueName: \"kubernetes.io/projected/04bf44e3-ad73-4db3-bf58-f4697644bef7-kube-api-access-5wmrp\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/181f90fa-40e7-4179-8866-6756a0cded18-metrics-tls\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938707 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-srv-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938748 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-cert\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg7vs\" (UniqueName: \"kubernetes.io/projected/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-kube-api-access-lg7vs\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938788 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-certs\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938828 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4p66\" (UniqueName: \"kubernetes.io/projected/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-kube-api-access-l4p66\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938851 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938872 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-plugins-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938897 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-node-bootstrap-token\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938924 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938992 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmqr\" (UniqueName: \"kubernetes.io/projected/e08de50b-8092-4f29-b2a8-a391b4778142-kube-api-access-llmqr\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-registration-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: 
I0124 06:55:37.939057 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgc22\" (UniqueName: \"kubernetes.io/projected/181f90fa-40e7-4179-8866-6756a0cded18-kube-api-access-cgc22\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cb716cde-084c-490b-a28f-f35c40c0adbb-tmpfs\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939115 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-webhook-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jglzh\" (UniqueName: \"kubernetes.io/projected/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-kube-api-access-jglzh\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-mountpoint-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 
06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939203 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxnm\" (UniqueName: \"kubernetes.io/projected/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-kube-api-access-ksxnm\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939234 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77311272-8b70-4772-8e4d-9a5f7d94f104-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939271 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-csi-data-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939452 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.939783 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.439769117 +0000 UTC m=+139.735874340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.940309 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-socket-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.941352 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.948119 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181f90fa-40e7-4179-8866-6756a0cded18-config-volume\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.948656 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-plugins-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " 
pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.948994 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.958671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-serving-cert\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.964669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-mountpoint-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.964696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e08de50b-8092-4f29-b2a8-a391b4778142-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.965034 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cb716cde-084c-490b-a28f-f35c40c0adbb-tmpfs\") pod 
\"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.967870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-registration-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.968600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-csi-data-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.969301 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.970860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.974076 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-webhook-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.980538 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" event={"ID":"a446c38f-dc5a-4a87-ba82-3405c0aadae7","Type":"ContainerStarted","Data":"9cd3ec74e8e76aafab28c7545b7eaa9684532e7016de1d0cc3ea26114f1aeab0"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.988652 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-node-bootstrap-token\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.989181 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-srv-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.989554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-profile-collector-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.990161 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.990242 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-cert\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.992670 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/181f90fa-40e7-4179-8866-6756a0cded18-metrics-tls\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.992835 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.993386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.995229 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm5zq\" (UniqueName: 
\"kubernetes.io/projected/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-kube-api-access-gm5zq\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.997815 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-srv-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.998180 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-certs\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.998376 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77311272-8b70-4772-8e4d-9a5f7d94f104-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.001481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlxqt\" (UniqueName: \"kubernetes.io/projected/77311272-8b70-4772-8e4d-9a5f7d94f104-kube-api-access-qlxqt\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.001636 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmnl\" (UniqueName: \"kubernetes.io/projected/6c264931-ec70-45fd-a7a3-979e2203eaf8-kube-api-access-wxmnl\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.012920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" event={"ID":"f25231da-465a-433f-9b0b-e37a23bc59b8","Type":"ContainerStarted","Data":"9b4aaf327852c667a2ce3e6055048ca927fb6f9016c799733a2f7ec80637aacc"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.017900 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f895q"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.020508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.033004 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwk6j" event={"ID":"737c0ee8-629a-4935-8357-c321e1ff5a41","Type":"ContainerStarted","Data":"77e69f0250fb15e98bc52e6cb574dc7c673aa1efbd9fd1c4efb7541542931034"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.033084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwk6j" event={"ID":"737c0ee8-629a-4935-8357-c321e1ff5a41","Type":"ContainerStarted","Data":"4106f1c63accdee57ffae6d09bd304b56ef24ef274946518c05bf979248320d5"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.037804 
4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" event={"ID":"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1","Type":"ContainerStarted","Data":"2dcdddbf1d4d92d24cf207099caabfd7b1e0fac7143ab29352cf476bb04d5cf0"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.040384 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" event={"ID":"c558c558-e8e3-4914-b88e-f5299916978f","Type":"ContainerStarted","Data":"b44b30232e383de6f55574b508cd739e9b5e68efa043a5cbf2db36bb22978236"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.040711 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.040922 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.540898473 +0000 UTC m=+139.837003696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.041066 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.041345 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.541334055 +0000 UTC m=+139.837439278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.048339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" event={"ID":"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe","Type":"ContainerStarted","Data":"42665a1813af5370cbc64f14f13ea2f1f0efbad5996ac8940983b9a839f5b75b"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.051120 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" event={"ID":"e71a6ca6-4ef8-4765-a0ae-0809a6343e38","Type":"ContainerStarted","Data":"3d51378a0f167c9625d9b20bff5ee58972a623e113e0b9b83984fa63e85d9a2e"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.053627 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" event={"ID":"3da81853-6557-4653-87f8-c2423aeb3994","Type":"ContainerStarted","Data":"002dc8c82822c04b6e62906f21869f215a480bf895ea5d7219771df426e65ddd"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.070892 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qt7k\" (UniqueName: \"kubernetes.io/projected/b9d48866-3fcd-4d12-83a2-2aee6060d4c4-kube-api-access-2qt7k\") pod \"migrator-59844c95c7-8hf2l\" (UID: \"b9d48866-3fcd-4d12-83a2-2aee6060d4c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.074359 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" event={"ID":"7266e9dc-a776-4331-a8d5-324deae3a589","Type":"ContainerStarted","Data":"f7c8eb31ac93a3b2e00ccb0ea8038981a27cd244cb8fddd45dee178b4cea1986"} Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.075386 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.082877 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.090034 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgc22\" (UniqueName: \"kubernetes.io/projected/181f90fa-40e7-4179-8866-6756a0cded18-kube-api-access-cgc22\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.091087 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.101400 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4p66\" (UniqueName: \"kubernetes.io/projected/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-kube-api-access-l4p66\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.122021 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wmrp\" (UniqueName: \"kubernetes.io/projected/04bf44e3-ad73-4db3-bf58-f4697644bef7-kube-api-access-5wmrp\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.129969 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.130319 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.137917 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.145070 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.145570 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.147264 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.64723919 +0000 UTC m=+139.943344413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.147816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.148106 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.648097161 +0000 UTC m=+139.944202384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.150804 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.153335 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.163690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmqr\" (UniqueName: \"kubernetes.io/projected/e08de50b-8092-4f29-b2a8-a391b4778142-kube-api-access-llmqr\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.170712 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg7vs\" (UniqueName: \"kubernetes.io/projected/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-kube-api-access-lg7vs\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.174144 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:38 crc kubenswrapper[4675]: W0124 06:55:38.183305 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1e1be6_05e8_4a21_9fff_f6f8437c4ebe.slice/crio-3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de WatchSource:0}: Error finding container 3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de: Status 404 returned error can't find the container with id 3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.183539 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.184470 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z56t\" (UniqueName: \"kubernetes.io/projected/cb716cde-084c-490b-a28f-f35c40c0adbb-kube-api-access-5z56t\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.221427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jglzh\" (UniqueName: \"kubernetes.io/projected/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-kube-api-access-jglzh\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.241295 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"collect-profiles-29487285-lfs59\" (UID: 
\"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.258665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.259871 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.759853323 +0000 UTC m=+140.055958546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.335035 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.360405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 
06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.360726 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.860700463 +0000 UTC m=+140.156805686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.365959 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.377144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.400038 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.408073 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.421980 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.431274 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxnm\" (UniqueName: \"kubernetes.io/projected/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-kube-api-access-ksxnm\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.461615 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.463743 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.963680745 +0000 UTC m=+140.259785968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: W0124 06:55:38.468877 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a88693_6b36_417b_829e_50981ccff9f7.slice/crio-d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812 WatchSource:0}: Error finding container d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812: Status 404 returned error can't find the container with id d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812 Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.517440 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.565225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.565562 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.06554507 +0000 UTC m=+140.361650293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.588369 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.622109 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-577lm"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.630055 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.630093 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.630589 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.647019 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.667059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.667320 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.167305212 +0000 UTC m=+140.463410435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.715001 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.768448 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.773268 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.273250028 +0000 UTC m=+140.569355251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.774212 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.863741 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jntdn"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.884654 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.885214 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.385198395 +0000 UTC m=+140.681303618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.903829 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.915346 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:38 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:38 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:38 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.915394 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.013853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.014517 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.514487085 +0000 UTC m=+140.810592308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.041998 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.116876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.117178 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.617153099 +0000 UTC m=+140.913258322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.118124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.118494 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.618483943 +0000 UTC m=+140.914589166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.168675 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" event={"ID":"0b664684-ced7-4027-8050-6da6e83d0fd7","Type":"ContainerStarted","Data":"ef7302979665f1075ad64fec9fd7e10f599f023692381dcc8fae43558a916682"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.173201 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" event={"ID":"cbfb9d06-165a-4595-9422-d6b22e311ec2","Type":"ContainerStarted","Data":"31d19390d2e422321d73730eed9f9a86324c1a2a61cb9ee87c93467d9b46acfd"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.174772 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" event={"ID":"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47","Type":"ContainerStarted","Data":"e564a07a05a469608a051eeb2837bad6d24efdae469ccf4cf55bd5adac7d9867"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.218549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.218991 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.718974033 +0000 UTC m=+141.015079256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.234370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerStarted","Data":"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.238053 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.242916 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" event={"ID":"8817d706-baea-4924-868d-c656652d9111","Type":"ContainerStarted","Data":"667091296b006971a6a8c293d0bcb09ff6c8bf8438be76e72de64a5e6a9d2113"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.247482 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.247522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" 
event={"ID":"bba258ca-d05a-417e-8a91-73e603062c20","Type":"ContainerStarted","Data":"f9521d7703eec808be2541f0783a9d10ef64dbf629866b0ef02783c511535335"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.300331 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" event={"ID":"a446c38f-dc5a-4a87-ba82-3405c0aadae7","Type":"ContainerStarted","Data":"5721e943166bebaa714e58987f2c2c1cab9d09ebfcd35e44307ea104f4abba36"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.319710 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.320427 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.321103 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.821090904 +0000 UTC m=+141.117196127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.323706 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" event={"ID":"1ffc3478-ba63-46d9-a78d-728fd442a3a2","Type":"ContainerStarted","Data":"6d353e639a9acc9b24c141b15cb4c0467a78f6ebf6a25048129c11a485b6ee4e"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.345858 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" event={"ID":"79a88693-6b36-417b-829e-50981ccff9f7","Type":"ContainerStarted","Data":"d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.354293 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b956c8c-f12f-4622-b67d-29349ba463aa" containerID="a1f730c602e0c7b223d6d5879afdd9c311ddb91bc4532df9ccce3e2acc2e7031" exitCode=0 Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.354357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerDied","Data":"a1f730c602e0c7b223d6d5879afdd9c311ddb91bc4532df9ccce3e2acc2e7031"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.366772 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" 
event={"ID":"f25231da-465a-433f-9b0b-e37a23bc59b8","Type":"ContainerStarted","Data":"4216243447ad9603723722b695208fab30625a2e50ba52af5149656da6860268"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.368108 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" event={"ID":"46902882-1cf1-4d7d-aa61-4502520d171f","Type":"ContainerStarted","Data":"e84e64905e1f5cc564577a182a5835196904762cdb0372049f57bbc763f06b85"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.369251 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" event={"ID":"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1","Type":"ContainerStarted","Data":"8ad8c70d2b29dc17627fb5a5f59a09c7979a056a6b600dc8c0a57fcb3dd3a090"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.373170 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerStarted","Data":"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.374075 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.380204 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" event={"ID":"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe","Type":"ContainerStarted","Data":"3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.382863 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" 
event={"ID":"c558c558-e8e3-4914-b88e-f5299916978f","Type":"ContainerStarted","Data":"22a13064c0d84be6a2c53890c19bb9f23b1b6ec4ff340a20aa558526065f733a"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.383180 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.385984 4675 patch_prober.go:28] interesting pod/console-operator-58897d9998-ntpw9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.386049 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" podUID="c558c558-e8e3-4914-b88e-f5299916978f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.390260 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:39 crc kubenswrapper[4675]: W0124 06:55:39.413845 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d48866_3fcd_4d12_83a2_2aee6060d4c4.slice/crio-19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66 WatchSource:0}: Error finding container 19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66: Status 404 returned error can't find the container with id 19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66 Jan 24 06:55:39 crc kubenswrapper[4675]: W0124 06:55:39.414150 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08de50b_8092_4f29_b2a8_a391b4778142.slice/crio-5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe WatchSource:0}: Error finding container 5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe: Status 404 returned error can't find the container with id 5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.425102 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.425975 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.925959243 +0000 UTC m=+141.222064456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: W0124 06:55:39.442073 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb716cde_084c_490b_a28f_f35c40c0adbb.slice/crio-dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f WatchSource:0}: Error finding container dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f: Status 404 returned error can't find the container with id dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.470042 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" podStartSLOduration=122.470021704 podStartE2EDuration="2m2.470021704s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.469373838 +0000 UTC m=+140.765479061" watchObservedRunningTime="2026-01-24 06:55:39.470021704 +0000 UTC m=+140.766126927" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.527890 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.529971 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.029955232 +0000 UTC m=+141.326060455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.629124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.641830 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xwk6j" podStartSLOduration=122.641812566 podStartE2EDuration="2m2.641812566s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.639550519 +0000 UTC m=+140.935655752" watchObservedRunningTime="2026-01-24 06:55:39.641812566 +0000 UTC m=+140.937917789" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.645282 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.145252332 +0000 UTC m=+141.441357555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.748284 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.763449 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.263387422 +0000 UTC m=+141.559492645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.819211 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c64jl" podStartSLOduration=122.819180126 podStartE2EDuration="2m2.819180126s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.816774127 +0000 UTC m=+141.112879350" watchObservedRunningTime="2026-01-24 06:55:39.819180126 +0000 UTC m=+141.115285349" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.850657 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.851617 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.351597406 +0000 UTC m=+141.647702629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.859529 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" podStartSLOduration=122.859509694 podStartE2EDuration="2m2.859509694s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.855626837 +0000 UTC m=+141.151732060" watchObservedRunningTime="2026-01-24 06:55:39.859509694 +0000 UTC m=+141.155614917" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.904014 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.920609 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:39 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:39 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:39 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.920666 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.952495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.952823 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.452810725 +0000 UTC m=+141.748915948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.053368 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.053585 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.553562092 +0000 UTC m=+141.849667315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.118049 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" podStartSLOduration=123.118018822 podStartE2EDuration="2m3.118018822s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.949083492 +0000 UTC m=+141.245188715" watchObservedRunningTime="2026-01-24 06:55:40.118018822 +0000 UTC m=+141.414124045" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.120438 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" podStartSLOduration=123.120431402 podStartE2EDuration="2m3.120431402s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.119159401 +0000 UTC m=+141.415264624" watchObservedRunningTime="2026-01-24 06:55:40.120431402 +0000 UTC m=+141.416536615" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.154837 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.155185 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.65517055 +0000 UTC m=+141.951275773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.257327 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.257937 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.757921487 +0000 UTC m=+142.054026710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.363898 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" podStartSLOduration=123.363870193 podStartE2EDuration="2m3.363870193s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.318421408 +0000 UTC m=+141.614526631" watchObservedRunningTime="2026-01-24 06:55:40.363870193 +0000 UTC m=+141.659975416" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.366681 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.367242 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.367579 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:40.867568146 +0000 UTC m=+142.163673369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.425200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" event={"ID":"77311272-8b70-4772-8e4d-9a5f7d94f104","Type":"ContainerStarted","Data":"c89a2721bd8c45a1ac00a40e59f297434d8e9b2e853e8ba04189b5a93ad55a5d"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.426340 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" event={"ID":"e71a6ca6-4ef8-4765-a0ae-0809a6343e38","Type":"ContainerStarted","Data":"2645ff1e667ec7a9d8e38bc4961da57d67d0bb2a37e3707c6bfb6c07c3ca3054"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.427110 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jntdn" event={"ID":"4bfd659c-336a-4497-bb5b-eaf18b1118e3","Type":"ContainerStarted","Data":"f3a85470a4355bfe8eac6b16acf14d7120cdd9f3e9447ad6d3620b88fdd0f89d"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.434905 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerStarted","Data":"983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.436137 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.443355 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m8cvs"] Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.463727 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" event={"ID":"cb716cde-084c-490b-a28f-f35c40c0adbb","Type":"ContainerStarted","Data":"dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.479200 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.480163 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.980136208 +0000 UTC m=+142.276241431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.487937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerStarted","Data":"d68bea8b00c026526be03f959939477c57040be0a40f40783ac0e65d642a96db"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.497026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" event={"ID":"e08de50b-8092-4f29-b2a8-a391b4778142","Type":"ContainerStarted","Data":"5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.519328 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" podStartSLOduration=123.519310066 podStartE2EDuration="2m3.519310066s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.507056971 +0000 UTC m=+141.803162194" watchObservedRunningTime="2026-01-24 06:55:40.519310066 +0000 UTC m=+141.815415289" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.530649 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9zjhs" 
event={"ID":"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b","Type":"ContainerStarted","Data":"af29d3bf9d739cd26d1a78b936ffb5d04a1c781d3a1b82a5e20a1b0ab7e39654"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.581430 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.581685 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.081674885 +0000 UTC m=+142.377780108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.590298 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" event={"ID":"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3","Type":"ContainerStarted","Data":"90b5259e216fde7ceb68fc43e10a268392f195bed3c8e5ffd441abf1bb5d215b"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.594788 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 
24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.613199 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"] Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.660510 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"] Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.684932 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.685562 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.185546029 +0000 UTC m=+142.481651252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.748330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" event={"ID":"7266e9dc-a776-4331-a8d5-324deae3a589","Type":"ContainerStarted","Data":"c5f33b8de9a539f9d5dab31ee044f987c0098468e34d182e68c9394590c9a407"} Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.788049 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.789266 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.28925413 +0000 UTC m=+142.585359353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.797673 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" podStartSLOduration=123.79765385 podStartE2EDuration="2m3.79765385s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.796510221 +0000 UTC m=+142.092615444" watchObservedRunningTime="2026-01-24 06:55:40.79765385 +0000 UTC m=+142.093759073" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.873074 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" event={"ID":"1ffc3478-ba63-46d9-a78d-728fd442a3a2","Type":"ContainerStarted","Data":"b6f24f0c0b21dab8940c7c17f138d5ab2470b6d7da019aba5adc5a75d5f5f6ed"} Jan 24 06:55:40 crc kubenswrapper[4675]: W0124 06:55:40.873658 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a4e6f5_492a_4b32_aa94_c8eca20b0067.slice/crio-4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8 WatchSource:0}: Error finding container 4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8: Status 404 returned error can't find the container with id 4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8 Jan 24 06:55:40 crc 
kubenswrapper[4675]: I0124 06:55:40.890282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.890388 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.390369286 +0000 UTC m=+142.686474509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.890611 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.890927 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:41.39092038 +0000 UTC m=+142.687025603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.917183 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:40 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:40 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:40 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.917233 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.992975 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:40.998322 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.498292702 +0000 UTC m=+142.794397925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:40.998422 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:40.998839 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.498826916 +0000 UTC m=+142.794932139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.039950 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" event={"ID":"b9d48866-3fcd-4d12-83a2-2aee6060d4c4","Type":"ContainerStarted","Data":"19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66"} Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.057104 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" podStartSLOduration=124.05708849 podStartE2EDuration="2m4.05708849s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.915056993 +0000 UTC m=+142.211162216" watchObservedRunningTime="2026-01-24 06:55:41.05708849 +0000 UTC m=+142.353193713" Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.058993 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pn69w"] Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.059027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" event={"ID":"3da81853-6557-4653-87f8-c2423aeb3994","Type":"ContainerStarted","Data":"9baf934efad6645f67d8e90940cb3419ae8138e3aafe50f06ba093388c5ea163"} Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.104433 
4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.105474 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.605456129 +0000 UTC m=+142.901561352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.115805 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" event={"ID":"927fb92f-2e72-4344-90e4-cfc0357135f4","Type":"ContainerStarted","Data":"2c5d5a450c0cf663939a529a8839c7748144ae97ebc8568728a1faad33770d9b"} Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.124448 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" podStartSLOduration=124.124428724 podStartE2EDuration="2m4.124428724s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 06:55:41.120505245 +0000 UTC m=+142.416610468" watchObservedRunningTime="2026-01-24 06:55:41.124428724 +0000 UTC m=+142.420533947" Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.124832 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"] Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.154167 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" event={"ID":"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe","Type":"ContainerStarted","Data":"e2d6bc71e9515d71fb3682c2840c077be52e05839a512b7ce9e2a477233b68ea"} Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.177926 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"] Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.206555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.207785 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.707773735 +0000 UTC m=+143.003878948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.235670 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:41 crc kubenswrapper[4675]: W0124 06:55:41.288017 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6820b1_d17b_4bf8_961e_ff96d8e79b72.slice/crio-ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f WatchSource:0}: Error finding container ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f: Status 404 returned error can't find the container with id ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.308048 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.308479 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.80844961 +0000 UTC m=+143.104554833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.323051 4675 csr.go:261] certificate signing request csr-kvtwg is approved, waiting to be issued Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.341128 4675 csr.go:257] certificate signing request csr-kvtwg is issued Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.404198 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-24zjn"] Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.410443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.410789 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.910777636 +0000 UTC m=+143.206882859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.514326 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.514789 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.515204 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.015187195 +0000 UTC m=+143.311292418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.618789 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.619433 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.119421769 +0000 UTC m=+143.415526992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.727200 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.727639 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.227620751 +0000 UTC m=+143.523725974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.833850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.834169 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.334157523 +0000 UTC m=+143.630262746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.908910 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:41 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:41 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:41 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.909219 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.935105 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.935506 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:42.435490445 +0000 UTC m=+143.731595658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.036319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.036750 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.536712003 +0000 UTC m=+143.832817216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.136986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.137302 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.637275635 +0000 UTC m=+143.933380858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.137519 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.137851 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.637838509 +0000 UTC m=+143.933943722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.197066 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerStarted","Data":"a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.197121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerStarted","Data":"50d0cb80aa27ce6cef25c689ef2dda8afc1fb093c0efbca3d65994205d5a3a48"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.238905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.240396 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.740376641 +0000 UTC m=+144.036481864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.247341 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" event={"ID":"bba258ca-d05a-417e-8a91-73e603062c20","Type":"ContainerStarted","Data":"6f9a6e5f01faa4072f12fcaaab2993c79791c5ae04f9d859c83f5b09af2afb4a"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.255197 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.277880 4675 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v7d6k container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.277936 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" podUID="cb716cde-084c-490b-a28f-f35c40c0adbb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.294878 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" podStartSLOduration=124.294856891 
podStartE2EDuration="2m4.294856891s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.293521978 +0000 UTC m=+143.589627201" watchObservedRunningTime="2026-01-24 06:55:42.294856891 +0000 UTC m=+143.590962114" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.295984 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" podStartSLOduration=125.295978019 podStartE2EDuration="2m5.295978019s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.247630852 +0000 UTC m=+143.543736075" watchObservedRunningTime="2026-01-24 06:55:42.295978019 +0000 UTC m=+143.592083242" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.302128 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"ae72bad0b37788d54a2e3b0b9747c63e0bfeeadfafb6e5276baf401dc53a5940"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.334323 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" event={"ID":"e71a6ca6-4ef8-4765-a0ae-0809a6343e38","Type":"ContainerStarted","Data":"40a2f0264fb19936349e3ea9e0013037ca5660c7a2a8449535225be5bf72b044"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.340375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: 
\"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.341026 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.841015604 +0000 UTC m=+144.137120827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.343345 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-24 06:50:41 +0000 UTC, rotation deadline is 2026-11-20 20:03:12.740702255 +0000 UTC Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.343375 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7213h7m30.397329801s for next certificate rotation Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.352787 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" event={"ID":"79a88693-6b36-417b-829e-50981ccff9f7","Type":"ContainerStarted","Data":"6595c5d705627f9456697d939f600b16c57bd7da43209cb30845009869d536a4"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.360306 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" 
event={"ID":"04bf44e3-ad73-4db3-bf58-f4697644bef7","Type":"ContainerStarted","Data":"2df4ba4ab05551df02515bab6ecbed4f60d3ed1ded667870ce8762b86eb00844"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.369592 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" event={"ID":"927fb92f-2e72-4344-90e4-cfc0357135f4","Type":"ContainerStarted","Data":"b915f19b17bc9aa527d39a1ddad01e14b8c15f51f08e54091a438bbabdfa7c28"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.371862 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47" containerID="fcc9c96fba47d3b6cf2d8428a262fd9072cd577ebc17d43cc1648f28c68c802d" exitCode=0 Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.372040 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" event={"ID":"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47","Type":"ContainerDied","Data":"fcc9c96fba47d3b6cf2d8428a262fd9072cd577ebc17d43cc1648f28c68c802d"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.386155 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" event={"ID":"0b664684-ced7-4027-8050-6da6e83d0fd7","Type":"ContainerStarted","Data":"2b696e3a0fbe09c5ecdc40742aa697ab1d4484b0bd9a2898dca77eb4588d009c"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.399776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jntdn" event={"ID":"4bfd659c-336a-4497-bb5b-eaf18b1118e3","Type":"ContainerStarted","Data":"1932b5cccfff107a78f3758e0d5ff30c3c2c77803215de69b8db70abaf7d4c53"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.400297 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:42 crc kubenswrapper[4675]: 
I0124 06:55:42.409214 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" podStartSLOduration=125.409193168 podStartE2EDuration="2m5.409193168s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.403969547 +0000 UTC m=+143.700074790" watchObservedRunningTime="2026-01-24 06:55:42.409193168 +0000 UTC m=+143.705298391" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.410014 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" podStartSLOduration=124.410007089 podStartE2EDuration="2m4.410007089s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.366371919 +0000 UTC m=+143.662477142" watchObservedRunningTime="2026-01-24 06:55:42.410007089 +0000 UTC m=+143.706112312" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.419881 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.420190 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.423087 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" event={"ID":"e08de50b-8092-4f29-b2a8-a391b4778142","Type":"ContainerStarted","Data":"cf163a0f075902fa516be1d8410b4cb6fcb6f82c9c111fcd8ad71a4322c13abc"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.441613 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m8cvs" event={"ID":"181f90fa-40e7-4179-8866-6756a0cded18","Type":"ContainerStarted","Data":"9e1f8e7dfb05d61fadb8e3972bd9b70c01709c8e550db9808258c0f2398b23c3"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.442742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.442823 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.942804467 +0000 UTC m=+144.238909690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.443493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.444536 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.94452479 +0000 UTC m=+144.240630013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.456112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" event={"ID":"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39","Type":"ContainerStarted","Data":"a13aef70a8fa20b9b60955af495cd0e0680bc4d0d02aa29ac1a4c51bd8644a89"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.515252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-24zjn" event={"ID":"5cbb9972-a73e-4826-9457-ae4f93b8d1c8","Type":"ContainerStarted","Data":"63c2e35c319c9a61a99d53105c17489b19ea3324117691625952d34d70aa2ee1"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.530824 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" podStartSLOduration=125.530810986 podStartE2EDuration="2m5.530810986s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.510101969 +0000 UTC m=+143.806207192" watchObservedRunningTime="2026-01-24 06:55:42.530810986 +0000 UTC m=+143.826916199" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.533237 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" 
event={"ID":"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3","Type":"ContainerStarted","Data":"f27948e4990ae52d60fac135f3620df2b0cf9fd5690e045f94839123007012a5"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.546386 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jntdn" podStartSLOduration=125.546372385 podStartE2EDuration="2m5.546372385s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.54058224 +0000 UTC m=+143.836687463" watchObservedRunningTime="2026-01-24 06:55:42.546372385 +0000 UTC m=+143.842477608" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.547572 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.548157 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.048132198 +0000 UTC m=+144.344237421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.557832 4675 generic.go:334] "Generic (PLEG): container finished" podID="8817d706-baea-4924-868d-c656652d9111" containerID="eef6628a01ed135f4f1fcc0eede9318ddd5643d944e656217a557f210613fbb9" exitCode=0 Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.558579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" event={"ID":"8817d706-baea-4924-868d-c656652d9111","Type":"ContainerDied","Data":"eef6628a01ed135f4f1fcc0eede9318ddd5643d944e656217a557f210613fbb9"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.587022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerStarted","Data":"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.588799 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.600299 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" event={"ID":"0a6820b1-d17b-4bf8-961e-ff96d8e79b72","Type":"ContainerStarted","Data":"ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.601348 
4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" podStartSLOduration=125.601331917 podStartE2EDuration="2m5.601331917s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.600260671 +0000 UTC m=+143.896365894" watchObservedRunningTime="2026-01-24 06:55:42.601331917 +0000 UTC m=+143.897437140" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.649853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.651392 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.151375818 +0000 UTC m=+144.447481041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.651409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" event={"ID":"46902882-1cf1-4d7d-aa61-4502520d171f","Type":"ContainerStarted","Data":"cdeb4071b2347fae17da5d3dbfceb41d85a6b13a2fa794af07c6fce6b9ed5896"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.712023 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" event={"ID":"b9d48866-3fcd-4d12-83a2-2aee6060d4c4","Type":"ContainerStarted","Data":"bb78eda42cd1348958f68737828b3a16fb9d934ca4018caa1363e8e321deeb06"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.712074 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" event={"ID":"b9d48866-3fcd-4d12-83a2-2aee6060d4c4","Type":"ContainerStarted","Data":"3e2c864159ef50471155edbccbe526f4f16102eb5c37b262e968b15cc2b211e8"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.752543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" event={"ID":"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe","Type":"ContainerStarted","Data":"95ff64ff32d0b6a4820c854f1501312cd07b26344cd58f562b81d1798ba151b3"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.753349 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.754349 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.2543346 +0000 UTC m=+144.550439823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.784820 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" podStartSLOduration=124.784801231 podStartE2EDuration="2m4.784801231s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.713997873 +0000 UTC m=+144.010103106" watchObservedRunningTime="2026-01-24 06:55:42.784801231 +0000 UTC m=+144.080906454" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.800983 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" 
event={"ID":"77311272-8b70-4772-8e4d-9a5f7d94f104","Type":"ContainerStarted","Data":"7b1087015054fbea2d83191977fb3690c25eb2522a0c443012dcd1aed9683dc9"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.802053 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.831171 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" event={"ID":"6c264931-ec70-45fd-a7a3-979e2203eaf8","Type":"ContainerStarted","Data":"53d4a10d69c2a6d72ccb7dd578d52fda686b560259acc38eb525b6e7443a8858"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.831223 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" event={"ID":"6c264931-ec70-45fd-a7a3-979e2203eaf8","Type":"ContainerStarted","Data":"2454212a019b938256d7bccbd22f8cc00082ae25e1fdebbe375edf44738a7cdd"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.832787 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.853233 4675 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-44bjw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.853304 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" podUID="6c264931-ec70-45fd-a7a3-979e2203eaf8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: 
connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.854790 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.855099 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.355086047 +0000 UTC m=+144.651191280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.869761 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" event={"ID":"cbfb9d06-165a-4595-9422-d6b22e311ec2","Type":"ContainerStarted","Data":"27c8b4ced38ae2441edede0de818c34a694d2df8abb5521d64288ed585956940"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.878757 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" podStartSLOduration=124.878739877 podStartE2EDuration="2m4.878739877s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.79036356 +0000 UTC m=+144.086468783" watchObservedRunningTime="2026-01-24 06:55:42.878739877 +0000 UTC m=+144.174845120" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921035 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:42 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:42 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:42 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921089 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921763 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerStarted","Data":"eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921797 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerStarted","Data":"4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921785 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" 
podStartSLOduration=124.921772843 podStartE2EDuration="2m4.921772843s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.881161008 +0000 UTC m=+144.177266221" watchObservedRunningTime="2026-01-24 06:55:42.921772843 +0000 UTC m=+144.217878066" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.922515 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.947140 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.947190 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.966209 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.967212 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.467195078 +0000 UTC m=+144.763300301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.986818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" event={"ID":"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe","Type":"ContainerStarted","Data":"8ebbdbbb0c150931aeb63fdcc192541ce8e368dff55b99ac948f0eb95f92e221"} Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.024132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" event={"ID":"a446c38f-dc5a-4a87-ba82-3405c0aadae7","Type":"ContainerStarted","Data":"28e075f2b0b0e3d87573a00f5dbd8bd3559dd7b4ac8bfa1adeccc546fba4e100"} Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.024842 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-24zjn" podStartSLOduration=9.024827877 podStartE2EDuration="9.024827877s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.986944821 +0000 UTC m=+144.283050054" watchObservedRunningTime="2026-01-24 06:55:43.024827877 +0000 UTC m=+144.320933100" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.070493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.072189 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.072197 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.572184621 +0000 UTC m=+144.868289844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.148793 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" podStartSLOduration=126.148772143 podStartE2EDuration="2m6.148772143s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.024823677 +0000 UTC m=+144.320928900" watchObservedRunningTime="2026-01-24 06:55:43.148772143 +0000 UTC m=+144.444877366" Jan 24 06:55:43 
crc kubenswrapper[4675]: I0124 06:55:43.149369 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" podStartSLOduration=125.149363438 podStartE2EDuration="2m5.149363438s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.13980723 +0000 UTC m=+144.435912453" watchObservedRunningTime="2026-01-24 06:55:43.149363438 +0000 UTC m=+144.445468661" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.173298 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.175189 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.675168963 +0000 UTC m=+144.971274186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.194928 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" podStartSLOduration=126.194912226 podStartE2EDuration="2m6.194912226s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.193425439 +0000 UTC m=+144.489530662" watchObservedRunningTime="2026-01-24 06:55:43.194912226 +0000 UTC m=+144.491017449" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.283606 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" podStartSLOduration=125.283588672 podStartE2EDuration="2m5.283588672s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.241995772 +0000 UTC m=+144.538100995" watchObservedRunningTime="2026-01-24 06:55:43.283588672 +0000 UTC m=+144.579693895" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.284298 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: 
\"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.284695 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.784679388 +0000 UTC m=+145.080784611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.391211 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.391771 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.891754393 +0000 UTC m=+145.187859616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.429064 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" podStartSLOduration=126.429049525 podStartE2EDuration="2m6.429049525s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.423291841 +0000 UTC m=+144.719397064" watchObservedRunningTime="2026-01-24 06:55:43.429049525 +0000 UTC m=+144.725154748" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.429676 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" podStartSLOduration=126.429670271 podStartE2EDuration="2m6.429670271s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.287818337 +0000 UTC m=+144.583923560" watchObservedRunningTime="2026-01-24 06:55:43.429670271 +0000 UTC m=+144.725775494" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.490262 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" podStartSLOduration=125.490242833 podStartE2EDuration="2m5.490242833s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.478038378 +0000 UTC m=+144.774143601" watchObservedRunningTime="2026-01-24 06:55:43.490242833 +0000 UTC m=+144.786348066" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.492496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.492812 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.992800428 +0000 UTC m=+145.288905651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.587130 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podStartSLOduration=125.587115923 podStartE2EDuration="2m5.587115923s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.545623227 +0000 UTC m=+144.841728450" watchObservedRunningTime="2026-01-24 06:55:43.587115923 +0000 UTC m=+144.883221146" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.588838 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" podStartSLOduration=125.588833407 podStartE2EDuration="2m5.588833407s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.585956095 +0000 UTC m=+144.882061308" watchObservedRunningTime="2026-01-24 06:55:43.588833407 +0000 UTC m=+144.884938630" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.593326 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.593504 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.093478923 +0000 UTC m=+145.389584146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.593586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.593919 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.093911343 +0000 UTC m=+145.390016556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.616014 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" podStartSLOduration=125.615998785 podStartE2EDuration="2m5.615998785s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.615856702 +0000 UTC m=+144.911961935" watchObservedRunningTime="2026-01-24 06:55:43.615998785 +0000 UTC m=+144.912104008" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.694871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.695085 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.19505715 +0000 UTC m=+145.491162373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.695296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.695728 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.195695886 +0000 UTC m=+145.491801109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.796876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.797010 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.296991626 +0000 UTC m=+145.593096849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.797114 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.797463 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.297453949 +0000 UTC m=+145.593559172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.898686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.899104 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.399089027 +0000 UTC m=+145.695194250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.906438 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:43 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:43 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:43 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.906490 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.000061 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.000396 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:44.500384378 +0000 UTC m=+145.796489611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.086150 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" event={"ID":"cb716cde-084c-490b-a28f-f35c40c0adbb","Type":"ContainerStarted","Data":"295109398b966303fc6252dcbd66040bc73e914ef4d33d0da91ed4040c1b70fd"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.088813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" event={"ID":"46902882-1cf1-4d7d-aa61-4502520d171f","Type":"ContainerStarted","Data":"f1caaca62e8b38a020d9b4a1eb6772a6d7cea588dd4daf5947509156a1f7a16c"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.090112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9zjhs" event={"ID":"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b","Type":"ContainerStarted","Data":"e9e740d3a132526908b8720bb17f2a3851e13ef991b71779474cc296a17045d6"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.091252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" event={"ID":"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39","Type":"ContainerStarted","Data":"d4dd9a29466928a9c0b9e9fe28dddaf91099cf4147f87e645539a7f9a1a82847"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 
06:55:44.092938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" event={"ID":"8817d706-baea-4924-868d-c656652d9111","Type":"ContainerStarted","Data":"3313826e4b98685bb861cc2e2f131b247e1d9e92b556788fba6f96e0787f1740"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.094514 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" event={"ID":"04bf44e3-ad73-4db3-bf58-f4697644bef7","Type":"ContainerStarted","Data":"3f5b2bcc177dfad596e29552f4fe678709ee3f753687226f9fa03acd57432393"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.095107 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.096962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" event={"ID":"bba258ca-d05a-417e-8a91-73e603062c20","Type":"ContainerStarted","Data":"1aa5dfc577d5005407ec464568b7f015321b23b063a2757a79bd26dffd9ff55e"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.099829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" event={"ID":"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47","Type":"ContainerStarted","Data":"aebee9ecc8fc808840adcbceb7d3f00740ec5905aea20caf9529c8aa384e7173"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.099902 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.100687 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.100807 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.600790596 +0000 UTC m=+145.896895819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.100997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.101301 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.601293009 +0000 UTC m=+145.897398232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.101924 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m8cvs" event={"ID":"181f90fa-40e7-4179-8866-6756a0cded18","Type":"ContainerStarted","Data":"2086666a390c118d19c489c59dcf471cccfa21b7532afa5edb2ae0283c9f7fe9"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.101956 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m8cvs" event={"ID":"181f90fa-40e7-4179-8866-6756a0cded18","Type":"ContainerStarted","Data":"ff78880942f01c00143fd5ef30061c3be3692bc4821f3108aea6bac4f8c3ada5"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.102008 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.103378 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-24zjn" event={"ID":"5cbb9972-a73e-4826-9457-ae4f93b8d1c8","Type":"ContainerStarted","Data":"06c7f1cd340d501a7f56eb462783c74556dedefa56dba3599e3235f3a5767d57"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.104922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" event={"ID":"77311272-8b70-4772-8e4d-9a5f7d94f104","Type":"ContainerStarted","Data":"500e6fc86b5b02eebeb9356e6c93f528f10b9a56090d8025e89eede189cdfe1b"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.107098 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"5ff78c16547708a8e9e7608592ee98488027b26a259bf3267a5230de2dcde65a"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.109348 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerStarted","Data":"1b50988bb14c7407f499b4a56fd36a9378bccfbd57513778951be0ec7f22e0f4"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.109475 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerStarted","Data":"55249ffde6422eeafbb4a0bce8213065861aa16362d7b7757e67fe7850d5c333"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111035 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" event={"ID":"0a6820b1-d17b-4bf8-961e-ff96d8e79b72","Type":"ContainerStarted","Data":"375101e380f61cd471897c555619b473535593a56b2bdbc73a3521f693f430b8"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111088 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" event={"ID":"0a6820b1-d17b-4bf8-961e-ff96d8e79b72","Type":"ContainerStarted","Data":"95f622a851dd1b4269f4320a8f660acab9ead77ee7958113bb993b73d9e627ca"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111848 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111885 4675 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111998 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.112060 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.128512 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.153551 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9zjhs" podStartSLOduration=10.153535634 podStartE2EDuration="10.153535634s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.151131453 +0000 UTC m=+145.447236676" watchObservedRunningTime="2026-01-24 06:55:44.153535634 +0000 UTC m=+145.449640857" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.196930 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.201548 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.201765 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.701733697 +0000 UTC m=+145.997838920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.202269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.206227 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.70621023 +0000 UTC m=+146.002315453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.304128 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.304327 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.80429849 +0000 UTC m=+146.100403713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.304391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.304776 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.804761831 +0000 UTC m=+146.100867064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.329538 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" podStartSLOduration=126.32951925 podStartE2EDuration="2m6.32951925s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.319761536 +0000 UTC m=+145.615866769" watchObservedRunningTime="2026-01-24 06:55:44.32951925 +0000 UTC m=+145.625624473" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.330958 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" podStartSLOduration=127.330948496 podStartE2EDuration="2m7.330948496s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.264030264 +0000 UTC m=+145.560135487" watchObservedRunningTime="2026-01-24 06:55:44.330948496 +0000 UTC m=+145.627053719" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.405412 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.405605 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.90558153 +0000 UTC m=+146.201686753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.405762 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.406073 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.906065362 +0000 UTC m=+146.202170585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.424503 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" podStartSLOduration=126.424488972 podStartE2EDuration="2m6.424488972s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.423068176 +0000 UTC m=+145.719173399" watchObservedRunningTime="2026-01-24 06:55:44.424488972 +0000 UTC m=+145.720594195" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.460130 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" podStartSLOduration=126.460112152 podStartE2EDuration="2m6.460112152s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.457832245 +0000 UTC m=+145.753937468" watchObservedRunningTime="2026-01-24 06:55:44.460112152 +0000 UTC m=+145.756217375" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.506865 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.507168 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.007153357 +0000 UTC m=+146.303258580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.525356 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m8cvs" podStartSLOduration=10.525336571 podStartE2EDuration="10.525336571s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.520940611 +0000 UTC m=+145.817045834" watchObservedRunningTime="2026-01-24 06:55:44.525336571 +0000 UTC m=+145.821441794" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.605180 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" podStartSLOduration=127.605162575 podStartE2EDuration="2m7.605162575s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.602740445 +0000 UTC m=+145.898845678" watchObservedRunningTime="2026-01-24 
06:55:44.605162575 +0000 UTC m=+145.901267798" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.608646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.608983 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.108970751 +0000 UTC m=+146.405075974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.709946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.710470 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:45.210452176 +0000 UTC m=+146.506557399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.724226 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" podStartSLOduration=126.72421128 podStartE2EDuration="2m6.72421128s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.635596996 +0000 UTC m=+145.931702219" watchObservedRunningTime="2026-01-24 06:55:44.72421128 +0000 UTC m=+146.020316503" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.811787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.812186 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.312128326 +0000 UTC m=+146.608233549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.908038 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:44 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:44 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:44 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.908340 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.912900 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.913086 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:45.413060247 +0000 UTC m=+146.709165470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.913444 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.913783 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.413768915 +0000 UTC m=+146.709874138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.014758 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.015055 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.515038645 +0000 UTC m=+146.811143868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.087816 4675 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v7d6k container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.087883 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" podUID="cb716cde-084c-490b-a28f-f35c40c0adbb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.116179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.116518 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-24 06:55:45.61650204 +0000 UTC m=+146.912607263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.118777 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"b377b96e7448287f254edc49511104be79d19d3744866afe64ddeb44ab0a89c4"} Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.119827 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.119881 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.120153 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 
06:55:45.120172 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.217581 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.217779 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.717750819 +0000 UTC m=+147.013856042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.218200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.219151 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.719143143 +0000 UTC m=+147.015248366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.319373 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.319488 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.81947207 +0000 UTC m=+147.115577293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.319782 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.320212 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.820192038 +0000 UTC m=+147.116297261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.338895 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.421343 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.421460 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.921437346 +0000 UTC m=+147.217542569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.421515 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.421861 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.921851878 +0000 UTC m=+147.217957101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.490444 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.491362 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.497929 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.522833 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.523024 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.022991344 +0000 UTC m=+147.319096567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523444 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523568 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.523682 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.023670531 +0000 UTC m=+147.319775754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.625402 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.125370321 +0000 UTC m=+147.421475544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625740 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625903 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625921 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"certified-operators-l7z59\" (UID: 
\"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.626589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.626654 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.626930 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.12691294 +0000 UTC m=+147.423018253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.653410 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.690236 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.700459 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.727802 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.728384 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.228358975 +0000 UTC m=+147.524464208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.737777 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.738907 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.742629 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.805253 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838031 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838075 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838117 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838139 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.838415 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.338403933 +0000 UTC m=+147.634509156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.903784 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.904958 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.908188 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:45 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:45 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:45 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.908238 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.926287 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:55:45 crc 
kubenswrapper[4675]: I0124 06:55:45.940291 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.940817 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.440796201 +0000 UTC m=+147.736901434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.940818 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " 
pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941298 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941512 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941624 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941750 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941877 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.942010 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.942108 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.943073 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.443055987 +0000 UTC m=+147.739161210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.943590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.944124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.950797 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 
06:55:45.952924 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.953802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.962235 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.991523 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.991747 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.047526 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.048102 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.048166 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.048193 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.048591 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:46.548575743 +0000 UTC m=+147.844680966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.049012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.049217 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.057107 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.076493 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.084533 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.086818 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.119621 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.127874 4675 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-b7h9m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.127917 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" podUID="8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155802 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155839 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.156163 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.656152071 +0000 UTC m=+147.952257294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.172958 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.182982 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.213057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"35f8bbea3430ba0b99f480831ab389faf48f9bab4202333bac1e88588d1cd56f"} Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.228263 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257311 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257531 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.258272 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:46.758256341 +0000 UTC m=+148.054361564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.258989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.259323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.306508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.358333 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.358633 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.858621319 +0000 UTC m=+148.154726542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.433018 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.459419 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.459784 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:46.959766605 +0000 UTC m=+148.255871828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.509146 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.510141 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.548160 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.561133 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.562154 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.062142843 +0000 UTC m=+148.358248066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.566780 4675 patch_prober.go:28] interesting pod/apiserver-76f77b778f-s7phr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]log ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]etcd ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/generic-apiserver-start-informers ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/max-in-flight-filter ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 24 06:55:46 crc kubenswrapper[4675]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/project.openshift.io-projectcache ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/openshift.io-startinformers ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 24 06:55:46 crc 
kubenswrapper[4675]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 24 06:55:46 crc kubenswrapper[4675]: livez check failed Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.566816 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" podUID="4b956c8c-f12f-4622-b67d-29349ba463aa" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.662332 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.662652 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.162631733 +0000 UTC m=+148.458736956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.731145 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.767504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.768828 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.268811496 +0000 UTC m=+148.564916719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.860022 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.862951 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.869710 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.870108 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.370080016 +0000 UTC m=+148.666185239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.881345 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.888658 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.909313 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.923454 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.923581 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959204 4675 patch_prober.go:28] interesting pod/console-f9d7485db-c64jl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959259 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c64jl" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959535 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:46 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959584 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.967804 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.977700 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.977823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.977943 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.980169 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.480158676 +0000 UTC m=+148.776263899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.980760 4675 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080290 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080775 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080812 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: E0124 06:55:47.080959 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.580943884 +0000 UTC m=+148.877049107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.095001 4675 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-24T06:55:46.980786201Z","Handler":null,"Name":""} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.126098 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.176783 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.184810 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:47 crc kubenswrapper[4675]: E0124 06:55:47.185114 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-24 06:55:47.685103665 +0000 UTC m=+148.981208888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.213073 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.227607 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerStarted","Data":"ee8ef93d6dbda9d79ddf1313f70a0d90a2db3cc78f034c04f57e14a671da3bf7"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.276077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"b9fc134cc520444293aac16ca1f74eff20249b5ab35b473d68cd909c47f97ad9"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.296447 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:47 crc kubenswrapper[4675]: E0124 06:55:47.296815 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.796799396 +0000 UTC m=+149.092904619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.310013 4675 generic.go:334] "Generic (PLEG): container finished" podID="1165063b-e2f9-406a-86c7-0559c419d043" containerID="59eb245fda115973b3f277ca4c5731837caa16e3bd2b40daf6b31eeaebc1bf72" exitCode=0 Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.310072 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"59eb245fda115973b3f277ca4c5731837caa16e3bd2b40daf6b31eeaebc1bf72"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.310097 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerStarted","Data":"207126b350e6a988e2c0611799f1606a299f405d91cfae55c96cd51fac72006a"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.314918 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" podStartSLOduration=13.314907718 podStartE2EDuration="13.314907718s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:47.312255352 +0000 UTC m=+148.608360575" watchObservedRunningTime="2026-01-24 06:55:47.314907718 +0000 UTC m=+148.611012941" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.316593 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.322015 4675 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.322052 4675 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.331062 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01de4ee9121d4c485f2b2face8eb6d72997e41317d1a52edd515d2097882bad2"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.331356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"66922b35ef91651141df068d64d779e1a4692f12bb83833c9d415682d6b8d139"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.357546 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:55:47 crc kubenswrapper[4675]: W0124 06:55:47.367173 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69537bd3_d5fe_4baf_a1dc_16c366f2518b.slice/crio-2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739 WatchSource:0}: Error finding container 2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739: Status 404 returned error can't find the container with id 2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739 Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.401413 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.437286 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.504632 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.512793 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.512842 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.515269 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.524054 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.537184 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.605084 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.605115 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " 
pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.605150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: W0124 06:55:47.632767 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a WatchSource:0}: Error finding container 0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a: Status 404 returned error can't find the container with id 0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.694128 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.699243 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700588 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700603 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 
10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700633 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700657 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.706866 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.706909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.706954 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 
crc kubenswrapper[4675]: I0124 06:55:47.707308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.707510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.721889 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.734935 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.764641 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.904884 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.906296 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.906354 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:47 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:47 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:47 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.906378 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.907258 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.984108 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.010738 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.010791 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.010809 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.041836 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.111763 
4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112008 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.113140 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.129511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.153427 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.180258 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.196167 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.280774 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.329343 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.356936 4675 generic.go:334] "Generic (PLEG): container finished" podID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerID="27e8eea471043c3df40a37d217289a9d5547edf9d7cc2d893fdce5d2d206a098" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.356998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"27e8eea471043c3df40a37d217289a9d5547edf9d7cc2d893fdce5d2d206a098"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.369107 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.370686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7f615e0-e12d-48da-8f84-a37b15b77580","Type":"ContainerStarted","Data":"f51015446de6a0acf88d6eedc44df6fc45c130e0b6934676c9bca0f1932b28fa"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.372156 4675 generic.go:334] "Generic (PLEG): container finished" podID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerID="a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.372222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" 
event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerDied","Data":"a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.405620 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.405676 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.405702 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerStarted","Data":"bbfda04626fc4ed4b4d8b4cd5fb06a28fc11f612333f3084a7f09445e8606bac"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.429253 4675 generic.go:334] "Generic (PLEG): container finished" podID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.429331 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.429357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerStarted","Data":"2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 
06:55:48.461375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d4a25aee26f75d07e8f243dc1b721d40717c874d59c69811a98bae5a542f4913"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.473857 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.702405 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:55:48 crc kubenswrapper[4675]: W0124 06:55:48.724556 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8bbe037_b253_4db3_b0f5_d02a51ca300e.slice/crio-6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f WatchSource:0}: Error finding container 6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f: Status 404 returned error can't find the container with id 6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.762555 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 06:55:48 crc kubenswrapper[4675]: W0124 06:55:48.769346 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6caa30_be9c_438c_a494_8b54b5df218c.slice/crio-26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c WatchSource:0}: Error finding container 26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c: Status 404 returned error can't find the container with id 26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.903109 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.904390 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.906865 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.908609 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:48 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:48 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:48 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.908657 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.920181 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.939277 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.939356 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.939418 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.962175 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.040526 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.040604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.040662 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.041163 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.041202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.089278 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.281181 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.282293 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.300607 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.315210 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.344701 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.344827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.344862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.394494 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.395200 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.402125 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.402653 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.406799 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.446267 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.446969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447039 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447068 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447655 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.488937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bdc3723a15e52a9e1243b9b7561aa37625c0f986209cd1ae4107c1f1d1e2cc4b"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.494184 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j2j9\" (UniqueName: 
\"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.494853 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a5a0d1daf3d4293e16f4f912242be249a82cc3d2aee3fe21fab075252eecb4c5"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.495619 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.497199 4675 generic.go:334] "Generic (PLEG): container finished" podID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" exitCode=0 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.497248 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.497269 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerStarted","Data":"6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.519440 4675 generic.go:334] "Generic (PLEG): container finished" podID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerID="156ee2cddf28e8263cf6105f2729141aefac521498a1d0cf537c8cb286858f52" exitCode=0 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.519516 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7f615e0-e12d-48da-8f84-a37b15b77580","Type":"ContainerDied","Data":"156ee2cddf28e8263cf6105f2729141aefac521498a1d0cf537c8cb286858f52"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.521988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerStarted","Data":"b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.522024 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerStarted","Data":"26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.522268 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.523288 4675 generic.go:334] "Generic (PLEG): container finished" podID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerID="eb71624ab1714e3b868179ef8715f76bbb985e9ee1eb32ef5ea46430a5377ae3" exitCode=0 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.524121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"eb71624ab1714e3b868179ef8715f76bbb985e9ee1eb32ef5ea46430a5377ae3"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.524140 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerStarted","Data":"236fdbdb18d6c63d9e15929a4a294f390be7b67da4280698a6623d3338464d82"} Jan 24 
06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.548403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.548439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.549233 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.591342 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.597459 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.618195 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" podStartSLOduration=132.618169066 podStartE2EDuration="2m12.618169066s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:49.568068204 +0000 UTC m=+150.864173447" watchObservedRunningTime="2026-01-24 06:55:49.618169066 +0000 UTC m=+150.914274309" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.677239 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:55:49 crc kubenswrapper[4675]: W0124 06:55:49.687035 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a336bf_741a_462c_bafd_9ff5e4838956.slice/crio-17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9 WatchSource:0}: Error finding container 17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9: Status 404 returned error can't find the container with id 17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.714407 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.920890 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:49 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:49 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:49 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.920945 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.953540 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.956096 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.956136 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.956180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.958324 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b4201e4-a1e0-4256-aa5a-67383ee87bee" (UID: "0b4201e4-a1e0-4256-aa5a-67383ee87bee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.992070 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2" (OuterVolumeSpecName: "kube-api-access-67wr2") pod "0b4201e4-a1e0-4256-aa5a-67383ee87bee" (UID: "0b4201e4-a1e0-4256-aa5a-67383ee87bee"). 
InnerVolumeSpecName "kube-api-access-67wr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.992268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b4201e4-a1e0-4256-aa5a-67383ee87bee" (UID: "0b4201e4-a1e0-4256-aa5a-67383ee87bee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.068659 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.068687 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.068697 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.279658 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.298858 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 06:55:50 crc kubenswrapper[4675]: W0124 06:55:50.311416 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9cd470_4963_4979_b7f6_50a2969febf8.slice/crio-782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971 WatchSource:0}: Error finding container 782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971: Status 404 returned error can't find the container with id 782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971 Jan 24 06:55:50 crc kubenswrapper[4675]: W0124 06:55:50.312481 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0835f7bf_325d_42e6_bc79_9c65c68ba95e.slice/crio-8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c WatchSource:0}: Error finding container 8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c: Status 404 returned error can't find the container with id 8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.532201 4675 generic.go:334] "Generic (PLEG): container finished" podID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" exitCode=0 Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.532322 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0"} Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.532543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerStarted","Data":"782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971"} Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.543323 4675 generic.go:334] "Generic (PLEG): container finished" podID="26a336bf-741a-462c-bafd-9ff5e4838956" 
containerID="950ce170714980389fc4fdc60fb6c50ac2d025bc7af1f23de6767552eb91501f" exitCode=0 Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.543430 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"950ce170714980389fc4fdc60fb6c50ac2d025bc7af1f23de6767552eb91501f"} Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.543455 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerStarted","Data":"17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9"} Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.551033 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0835f7bf-325d-42e6-bc79-9c65c68ba95e","Type":"ContainerStarted","Data":"8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c"} Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.568642 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.569307 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerDied","Data":"50d0cb80aa27ce6cef25c689ef2dda8afc1fb093c0efbca3d65994205d5a3a48"} Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.569339 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d0cb80aa27ce6cef25c689ef2dda8afc1fb093c0efbca3d65994205d5a3a48" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.769871 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.879497 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"c7f615e0-e12d-48da-8f84-a37b15b77580\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.879554 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"c7f615e0-e12d-48da-8f84-a37b15b77580\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.879902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c7f615e0-e12d-48da-8f84-a37b15b77580" (UID: "c7f615e0-e12d-48da-8f84-a37b15b77580"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.896223 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c7f615e0-e12d-48da-8f84-a37b15b77580" (UID: "c7f615e0-e12d-48da-8f84-a37b15b77580"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.908518 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:50 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:50 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:50 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.908566 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.981703 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.981757 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.515801 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.523108 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.583384 4675 generic.go:334] "Generic (PLEG): container finished" podID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" 
containerID="3691e01518f49140b2c391402b8660f9fda43695227e8e0f2102580109fb95bd" exitCode=0 Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.583756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0835f7bf-325d-42e6-bc79-9c65c68ba95e","Type":"ContainerDied","Data":"3691e01518f49140b2c391402b8660f9fda43695227e8e0f2102580109fb95bd"} Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.596304 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.596377 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7f615e0-e12d-48da-8f84-a37b15b77580","Type":"ContainerDied","Data":"f51015446de6a0acf88d6eedc44df6fc45c130e0b6934676c9bca0f1932b28fa"} Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.596418 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f51015446de6a0acf88d6eedc44df6fc45c130e0b6934676c9bca0f1932b28fa" Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.906297 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:51 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:51 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:51 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.906346 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 
06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.877670 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.905358 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:52 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:52 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:52 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.905425 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.942688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.944345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.944641 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"0835f7bf-325d-42e6-bc79-9c65c68ba95e" (UID: "0835f7bf-325d-42e6-bc79-9c65c68ba95e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.962908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0835f7bf-325d-42e6-bc79-9c65c68ba95e" (UID: "0835f7bf-325d-42e6-bc79-9c65c68ba95e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.046058 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.046103 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.157309 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.610412 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0835f7bf-325d-42e6-bc79-9c65c68ba95e","Type":"ContainerDied","Data":"8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c"} Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.610449 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c" Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.610514 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.905372 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:53 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:53 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:53 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.905424 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:54 crc kubenswrapper[4675]: I0124 06:55:54.905941 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:54 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:54 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:54 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:54 crc kubenswrapper[4675]: I0124 06:55:54.906336 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:55 crc kubenswrapper[4675]: I0124 06:55:55.905450 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:55 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:55 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:55 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:55 crc kubenswrapper[4675]: I0124 06:55:55.905510 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.905764 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:56 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:56 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:56 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.905829 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.915070 4675 patch_prober.go:28] interesting pod/console-f9d7485db-c64jl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.915107 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c64jl" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 24 06:55:57 crc kubenswrapper[4675]: I0124 06:55:57.707037 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:57 crc kubenswrapper[4675]: I0124 06:55:57.904675 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:57 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:57 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:57 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:57 crc kubenswrapper[4675]: I0124 06:55:57.904748 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:58 crc kubenswrapper[4675]: I0124 06:55:58.908229 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:58 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:58 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:58 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:58 crc kubenswrapper[4675]: I0124 06:55:58.908658 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:59 crc kubenswrapper[4675]: I0124 
06:55:59.905946 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:59 crc kubenswrapper[4675]: I0124 06:55:59.909221 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:56:00 crc kubenswrapper[4675]: I0124 06:56:00.126870 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:56:00 crc kubenswrapper[4675]: I0124 06:56:00.146855 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:56:00 crc kubenswrapper[4675]: I0124 06:56:00.398088 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:56:06 crc kubenswrapper[4675]: I0124 06:56:06.919234 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:56:06 crc kubenswrapper[4675]: I0124 06:56:06.923060 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:56:08 crc kubenswrapper[4675]: I0124 06:56:08.207003 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:56:08 crc kubenswrapper[4675]: I0124 06:56:08.630238 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:56:08 crc kubenswrapper[4675]: I0124 06:56:08.630573 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:56:14 crc kubenswrapper[4675]: E0124 06:56:14.514567 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 06:56:14 crc kubenswrapper[4675]: E0124 06:56:14.515023 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm4jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-l7z59_openshift-marketplace(1165063b-e2f9-406a-86c7-0559c419d043): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 06:56:14 crc kubenswrapper[4675]: E0124 06:56:14.516188 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-l7z59" 
podUID="1165063b-e2f9-406a-86c7-0559c419d043" Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.409025 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043" Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.442574 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.442869 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6m6sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mrxqr_openshift-marketplace(69537bd3-d5fe-4baf-a1dc-16c366f2518b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.444290 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" Jan 24 06:56:18 crc 
kubenswrapper[4675]: I0124 06:56:18.095521 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.778638 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.795614 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.795764 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rz7td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hrjxs_openshift-marketplace(b8bbe037-b253-4db3-b0f5-d02a51ca300e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.797044 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hrjxs" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" Jan 24 06:56:21 crc 
kubenswrapper[4675]: E0124 06:56:21.723916 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hrjxs" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.182251 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8mdgj"] Jan 24 06:56:22 crc kubenswrapper[4675]: W0124 06:56:22.186431 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b6e6bdc_02e8_45ac_b89d_caf409ba451e.slice/crio-0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a WatchSource:0}: Error finding container 0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a: Status 404 returned error can't find the container with id 0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.927238 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerStarted","Data":"1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd"} Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.931856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerStarted","Data":"b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745"} Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.935162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" 
event={"ID":"9b6e6bdc-02e8-45ac-b89d-caf409ba451e","Type":"ContainerStarted","Data":"0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a"} Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.936737 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerStarted","Data":"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f"} Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.938271 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerStarted","Data":"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc"} Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.941267 4675 generic.go:334] "Generic (PLEG): container finished" podID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerID="acc5dc0c07c3a0b5401b6f9bc7ce29ec56cf35e994494b4063328ab3e6990f50" exitCode=0 Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.941327 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"acc5dc0c07c3a0b5401b6f9bc7ce29ec56cf35e994494b4063328ab3e6990f50"} Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.954913 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" exitCode=0 Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.954998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc"} Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 
06:56:23.957272 4675 generic.go:334] "Generic (PLEG): container finished" podID="26a336bf-741a-462c-bafd-9ff5e4838956" containerID="1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd" exitCode=0 Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.957914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd"} Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.966052 4675 generic.go:334] "Generic (PLEG): container finished" podID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerID="b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745" exitCode=0 Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.966140 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745"} Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.974308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" event={"ID":"9b6e6bdc-02e8-45ac-b89d-caf409ba451e","Type":"ContainerStarted","Data":"c7e67da9454b525d59c4295c5e3eabc9ae1ed649eea092b8f8c6e9549f1859aa"} Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.981928 4675 generic.go:334] "Generic (PLEG): container finished" podID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" exitCode=0 Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.982064 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f"} 
Jan 24 06:56:24 crc kubenswrapper[4675]: I0124 06:56:24.989835 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" event={"ID":"9b6e6bdc-02e8-45ac-b89d-caf409ba451e","Type":"ContainerStarted","Data":"64a6bcc42b75551d2745f412f87cf946e31d602bd2f2c60f5bda5bc8334240a1"} Jan 24 06:56:26 crc kubenswrapper[4675]: I0124 06:56:26.452738 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.012416 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8mdgj" podStartSLOduration=170.012395512 podStartE2EDuration="2m50.012395512s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:56:27.011334927 +0000 UTC m=+188.307440150" watchObservedRunningTime="2026-01-24 06:56:27.012395512 +0000 UTC m=+188.308500735" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226661 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 06:56:27 crc kubenswrapper[4675]: E0124 06:56:27.226887 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226898 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: E0124 06:56:27.226906 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226911 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: E0124 06:56:27.226929 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerName="collect-profiles" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226935 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerName="collect-profiles" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229402 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229427 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerName="collect-profiles" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229441 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229838 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.235856 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.235897 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.239760 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.287017 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.287307 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.389367 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.389470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.389758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.412466 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.553809 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.934002 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 06:56:28 crc kubenswrapper[4675]: I0124 06:56:28.007515 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerStarted","Data":"c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c"} Jan 24 06:56:28 crc kubenswrapper[4675]: I0124 06:56:28.008893 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerStarted","Data":"c42d4e6d79e6a249b3f19fb30c780fa3f18850b819ae46c1168daf382a770be1"} Jan 24 06:56:28 crc kubenswrapper[4675]: I0124 06:56:28.023939 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f482d" podStartSLOduration=3.074326582 podStartE2EDuration="41.023922702s" podCreationTimestamp="2026-01-24 06:55:47 +0000 UTC" firstStartedPulling="2026-01-24 06:55:49.528490916 +0000 UTC m=+150.824596139" lastFinishedPulling="2026-01-24 06:56:27.478087036 +0000 UTC m=+188.774192259" observedRunningTime="2026-01-24 06:56:28.020741942 +0000 UTC m=+189.316847185" watchObservedRunningTime="2026-01-24 06:56:28.023922702 +0000 UTC m=+189.320027925" Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.035871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerStarted","Data":"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a"} Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.037229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerStarted","Data":"486ff467a3dcbf3710daad231b77c4c48f4036fea51d9faa8b991fb420a9aa34"} Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.061473 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtt58" podStartSLOduration=3.575059909 podStartE2EDuration="44.061456071s" podCreationTimestamp="2026-01-24 06:55:46 +0000 UTC" firstStartedPulling="2026-01-24 06:55:48.407911962 +0000 UTC m=+149.704017185" lastFinishedPulling="2026-01-24 06:56:28.894308124 +0000 UTC m=+190.190413347" observedRunningTime="2026-01-24 06:56:30.05578418 +0000 UTC m=+191.351889403" watchObservedRunningTime="2026-01-24 06:56:30.061456071 +0000 UTC m=+191.357561294" Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.075266 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.075242446 podStartE2EDuration="3.075242446s" podCreationTimestamp="2026-01-24 06:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:56:30.072650371 +0000 UTC m=+191.368755604" watchObservedRunningTime="2026-01-24 06:56:30.075242446 +0000 UTC m=+191.371347679" Jan 24 06:56:31 crc kubenswrapper[4675]: I0124 06:56:31.049547 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerStarted","Data":"ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f"} Jan 24 06:56:32 crc kubenswrapper[4675]: I0124 06:56:32.061444 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerID="486ff467a3dcbf3710daad231b77c4c48f4036fea51d9faa8b991fb420a9aa34" exitCode=0 Jan 24 
06:56:32 crc kubenswrapper[4675]: I0124 06:56:32.062801 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerDied","Data":"486ff467a3dcbf3710daad231b77c4c48f4036fea51d9faa8b991fb420a9aa34"} Jan 24 06:56:32 crc kubenswrapper[4675]: I0124 06:56:32.083899 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmxj8" podStartSLOduration=5.297039704 podStartE2EDuration="47.083882013s" podCreationTimestamp="2026-01-24 06:55:45 +0000 UTC" firstStartedPulling="2026-01-24 06:55:48.358666682 +0000 UTC m=+149.654771905" lastFinishedPulling="2026-01-24 06:56:30.145508991 +0000 UTC m=+191.441614214" observedRunningTime="2026-01-24 06:56:32.080326435 +0000 UTC m=+193.376431648" watchObservedRunningTime="2026-01-24 06:56:32.083882013 +0000 UTC m=+193.379987236" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.446279 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465409 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465522 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" (UID: "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465564 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465770 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.470925 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" (UID: "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.567495 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.631594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 06:56:33 crc kubenswrapper[4675]: E0124 06:56:33.632625 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerName="pruner" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.632650 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerName="pruner" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.632793 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerName="pruner" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.633352 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.645266 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.668955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.669032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.669152 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769706 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769855 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.796559 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.952876 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.071743 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerDied","Data":"c42d4e6d79e6a249b3f19fb30c780fa3f18850b819ae46c1168daf382a770be1"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.071791 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42d4e6d79e6a249b3f19fb30c780fa3f18850b819ae46c1168daf382a770be1" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.071872 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.082522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerStarted","Data":"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.096563 4675 generic.go:334] "Generic (PLEG): container finished" podID="1165063b-e2f9-406a-86c7-0559c419d043" containerID="42b8ee55bd339ab55f41df3ff58f52b52b0d7e8bb773f48fda829b8f6ab4ed80" exitCode=0 Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.096626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"42b8ee55bd339ab55f41df3ff58f52b52b0d7e8bb773f48fda829b8f6ab4ed80"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.138629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" 
event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerStarted","Data":"a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.142290 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ljvrz" podStartSLOduration=2.244527604 podStartE2EDuration="45.142276345s" podCreationTimestamp="2026-01-24 06:55:49 +0000 UTC" firstStartedPulling="2026-01-24 06:55:50.535230645 +0000 UTC m=+151.831335868" lastFinishedPulling="2026-01-24 06:56:33.432979386 +0000 UTC m=+194.729084609" observedRunningTime="2026-01-24 06:56:34.10726307 +0000 UTC m=+195.403368303" watchObservedRunningTime="2026-01-24 06:56:34.142276345 +0000 UTC m=+195.438381558" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.168397 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6vjtj" podStartSLOduration=3.273827625 podStartE2EDuration="46.168293894s" podCreationTimestamp="2026-01-24 06:55:48 +0000 UTC" firstStartedPulling="2026-01-24 06:55:50.545124572 +0000 UTC m=+151.841229795" lastFinishedPulling="2026-01-24 06:56:33.439590841 +0000 UTC m=+194.735696064" observedRunningTime="2026-01-24 06:56:34.165412732 +0000 UTC m=+195.461517955" watchObservedRunningTime="2026-01-24 06:56:34.168293894 +0000 UTC m=+195.464399127" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.421276 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.145710 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerStarted","Data":"83da16f3181e756a8f5edd35c942b704ad650f66f9209eff0332ba37056e3ddb"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.146056 4675 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerStarted","Data":"81112f58abc96693bfdadaf20511335609d3187604257dd97b45e9ebeca9ec56"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.148990 4675 generic.go:334] "Generic (PLEG): container finished" podID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" exitCode=0 Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.149043 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.151849 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerStarted","Data":"f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.171228 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.171210138 podStartE2EDuration="2.171210138s" podCreationTimestamp="2026-01-24 06:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:56:35.170969632 +0000 UTC m=+196.467074855" watchObservedRunningTime="2026-01-24 06:56:35.171210138 +0000 UTC m=+196.467315351" Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.233265 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l7z59" podStartSLOduration=2.937969624 podStartE2EDuration="50.233249679s" podCreationTimestamp="2026-01-24 06:55:45 +0000 UTC" 
firstStartedPulling="2026-01-24 06:55:47.316292353 +0000 UTC m=+148.612397576" lastFinishedPulling="2026-01-24 06:56:34.611572408 +0000 UTC m=+195.907677631" observedRunningTime="2026-01-24 06:56:35.229286499 +0000 UTC m=+196.525391732" watchObservedRunningTime="2026-01-24 06:56:35.233249679 +0000 UTC m=+196.529354902" Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.805439 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.805686 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.058986 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.059033 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.098937 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.159899 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerStarted","Data":"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe"} Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.227337 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.228843 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:36 crc 
kubenswrapper[4675]: I0124 06:56:36.228879 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.249604 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrxqr" podStartSLOduration=3.952885058 podStartE2EDuration="51.249586648s" podCreationTimestamp="2026-01-24 06:55:45 +0000 UTC" firstStartedPulling="2026-01-24 06:55:48.476617789 +0000 UTC m=+149.772723012" lastFinishedPulling="2026-01-24 06:56:35.773319379 +0000 UTC m=+197.069424602" observedRunningTime="2026-01-24 06:56:36.201787104 +0000 UTC m=+197.497892327" watchObservedRunningTime="2026-01-24 06:56:36.249586648 +0000 UTC m=+197.545691871" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.434669 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.434711 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.471144 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.860316 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:36 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:36 crc kubenswrapper[4675]: > Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.213318 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtt58" Jan 24 
06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.276016 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:37 crc kubenswrapper[4675]: > Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.905676 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.905760 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.947354 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.209316 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.518081 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630239 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630304 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630349 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630902 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.631005 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82" gracePeriod=600 Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.177858 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82" exitCode=0 Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.177966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"} Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.316543 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.316582 4675 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.598374 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.598783 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.185200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"} Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.185319 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtt58" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" containerID="cri-o://acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" gracePeriod=2 Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.362018 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vjtj" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:40 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:40 crc kubenswrapper[4675]: > Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.639682 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ljvrz" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:40 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:40 crc 
kubenswrapper[4675]: > Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.020324 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.086867 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"06c92a2c-0b68-4b8f-92b3-9688aef50674\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.086973 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"06c92a2c-0b68-4b8f-92b3-9688aef50674\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.087013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"06c92a2c-0b68-4b8f-92b3-9688aef50674\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.087848 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities" (OuterVolumeSpecName: "utilities") pod "06c92a2c-0b68-4b8f-92b3-9688aef50674" (UID: "06c92a2c-0b68-4b8f-92b3-9688aef50674"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.092939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj" (OuterVolumeSpecName: "kube-api-access-z6rbj") pod "06c92a2c-0b68-4b8f-92b3-9688aef50674" (UID: "06c92a2c-0b68-4b8f-92b3-9688aef50674"). InnerVolumeSpecName "kube-api-access-z6rbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.142840 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c92a2c-0b68-4b8f-92b3-9688aef50674" (UID: "06c92a2c-0b68-4b8f-92b3-9688aef50674"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.188490 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.188528 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.188543 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.196347 4675 generic.go:334] "Generic (PLEG): container finished" podID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" 
containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" exitCode=0 Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.196411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf"} Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198539 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" exitCode=0 Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198614 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a"} Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198623 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"bbfda04626fc4ed4b4d8b4cd5fb06a28fc11f612333f3084a7f09445e8606bac"} Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198661 4675 scope.go:117] "RemoveContainer" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.223144 4675 scope.go:117] "RemoveContainer" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.236141 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.240869 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.246107 4675 scope.go:117] "RemoveContainer" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.264658 4675 scope.go:117] "RemoveContainer" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" Jan 24 06:56:42 crc kubenswrapper[4675]: E0124 06:56:42.265065 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a\": container with ID starting with acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a not found: ID does not exist" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.265112 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a"} err="failed to get container status \"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a\": rpc error: code = NotFound desc = could not find container \"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a\": container with ID starting with acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a not found: ID does not exist" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.265137 4675 scope.go:117] "RemoveContainer" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" Jan 24 06:56:42 crc kubenswrapper[4675]: E0124 06:56:42.269755 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc\": container with ID starting with 566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc not found: ID does not exist" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.269790 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc"} err="failed to get container status \"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc\": rpc error: code = NotFound desc = could not find container \"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc\": container with ID starting with 566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc not found: ID does not exist" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.269811 4675 scope.go:117] "RemoveContainer" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" Jan 24 06:56:42 crc kubenswrapper[4675]: E0124 
06:56:42.270161 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1\": container with ID starting with c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1 not found: ID does not exist" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.270239 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1"} err="failed to get container status \"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1\": rpc error: code = NotFound desc = could not find container \"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1\": container with ID starting with c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1 not found: ID does not exist" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.948474 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" path="/var/lib/kubelet/pods/06c92a2c-0b68-4b8f-92b3-9688aef50674/volumes" Jan 24 06:56:43 crc kubenswrapper[4675]: I0124 06:56:43.204317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerStarted","Data":"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d"} Jan 24 06:56:43 crc kubenswrapper[4675]: I0124 06:56:43.225384 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrjxs" podStartSLOduration=3.070550489 podStartE2EDuration="56.225362652s" podCreationTimestamp="2026-01-24 06:55:47 +0000 UTC" firstStartedPulling="2026-01-24 06:55:49.499115242 +0000 UTC m=+150.795220465" 
lastFinishedPulling="2026-01-24 06:56:42.653927405 +0000 UTC m=+203.950032628" observedRunningTime="2026-01-24 06:56:43.222987773 +0000 UTC m=+204.519092996" watchObservedRunningTime="2026-01-24 06:56:43.225362652 +0000 UTC m=+204.521467875" Jan 24 06:56:45 crc kubenswrapper[4675]: I0124 06:56:45.856573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:45 crc kubenswrapper[4675]: I0124 06:56:45.889816 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:46 crc kubenswrapper[4675]: I0124 06:56:46.278362 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:46 crc kubenswrapper[4675]: I0124 06:56:46.317444 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:46 crc kubenswrapper[4675]: I0124 06:56:46.718817 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.236649 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" containerID="cri-o://28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" gracePeriod=2 Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.281068 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.281117 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.337113 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.803633 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.867147 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.867402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.868072 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities" (OuterVolumeSpecName: "utilities") pod "69537bd3-d5fe-4baf-a1dc-16c366f2518b" (UID: "69537bd3-d5fe-4baf-a1dc-16c366f2518b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.883614 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc" (OuterVolumeSpecName: "kube-api-access-6m6sc") pod "69537bd3-d5fe-4baf-a1dc-16c366f2518b" (UID: "69537bd3-d5fe-4baf-a1dc-16c366f2518b"). InnerVolumeSpecName "kube-api-access-6m6sc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.967790 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.968090 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.968118 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.019563 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69537bd3-d5fe-4baf-a1dc-16c366f2518b" (UID: "69537bd3-d5fe-4baf-a1dc-16c366f2518b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.069149 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.247829 4675 generic.go:334] "Generic (PLEG): container finished" podID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" exitCode=0 Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248000 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe"} Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248191 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739"} Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248269 4675 scope.go:117] "RemoveContainer" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.274007 4675 scope.go:117] "RemoveContainer" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.303429 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:56:49 crc kubenswrapper[4675]: 
I0124 06:56:49.309784 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.314034 4675 scope.go:117] "RemoveContainer" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.314214 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.341833 4675 scope.go:117] "RemoveContainer" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" Jan 24 06:56:49 crc kubenswrapper[4675]: E0124 06:56:49.342310 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe\": container with ID starting with 28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe not found: ID does not exist" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342353 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe"} err="failed to get container status \"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe\": rpc error: code = NotFound desc = could not find container \"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe\": container with ID starting with 28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe not found: ID does not exist" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342377 4675 scope.go:117] "RemoveContainer" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" Jan 24 06:56:49 crc kubenswrapper[4675]: E0124 06:56:49.342659 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc\": container with ID starting with eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc not found: ID does not exist" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342680 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc"} err="failed to get container status \"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc\": rpc error: code = NotFound desc = could not find container \"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc\": container with ID starting with eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc not found: ID does not exist" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342691 4675 scope.go:117] "RemoveContainer" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" Jan 24 06:56:49 crc kubenswrapper[4675]: E0124 06:56:49.342913 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429\": container with ID starting with e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429 not found: ID does not exist" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342934 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429"} err="failed to get container status \"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429\": rpc error: code = NotFound desc = could 
not find container \"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429\": container with ID starting with e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429 not found: ID does not exist" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.372128 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.415380 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.635218 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.686609 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:50 crc kubenswrapper[4675]: I0124 06:56:50.949096 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" path="/var/lib/kubelet/pods/69537bd3-d5fe-4baf-a1dc-16c366f2518b/volumes" Jan 24 06:56:51 crc kubenswrapper[4675]: I0124 06:56:51.116393 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:56:51 crc kubenswrapper[4675]: I0124 06:56:51.261834 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrjxs" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" containerID="cri-o://be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" gracePeriod=2 Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.145410 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267834 4675 generic.go:334] "Generic (PLEG): container finished" podID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" exitCode=0 Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267884 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267891 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d"} Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267928 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f"} Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267944 4675 scope.go:117] "RemoveContainer" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.284025 4675 scope.go:117] "RemoveContainer" containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.303696 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.303749 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.303853 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.304978 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities" (OuterVolumeSpecName: "utilities") pod "b8bbe037-b253-4db3-b0f5-d02a51ca300e" (UID: "b8bbe037-b253-4db3-b0f5-d02a51ca300e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.306824 4675 scope.go:117] "RemoveContainer" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.309641 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td" (OuterVolumeSpecName: "kube-api-access-rz7td") pod "b8bbe037-b253-4db3-b0f5-d02a51ca300e" (UID: "b8bbe037-b253-4db3-b0f5-d02a51ca300e"). InnerVolumeSpecName "kube-api-access-rz7td". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.325134 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.325163 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339063 4675 scope.go:117] "RemoveContainer" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" Jan 24 06:56:52 crc kubenswrapper[4675]: E0124 06:56:52.339478 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d\": container with ID starting with be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d not found: ID does not exist" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339573 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d"} err="failed to get container status \"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d\": rpc error: code = NotFound desc = could not find container \"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d\": container with ID starting with be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d not found: ID does not exist" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339660 4675 scope.go:117] "RemoveContainer" 
containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" Jan 24 06:56:52 crc kubenswrapper[4675]: E0124 06:56:52.339953 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf\": container with ID starting with aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf not found: ID does not exist" containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339972 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf"} err="failed to get container status \"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf\": rpc error: code = NotFound desc = could not find container \"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf\": container with ID starting with aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf not found: ID does not exist" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339985 4675 scope.go:117] "RemoveContainer" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" Jan 24 06:56:52 crc kubenswrapper[4675]: E0124 06:56:52.340265 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a\": container with ID starting with 0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a not found: ID does not exist" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.340344 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a"} err="failed to get container status \"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a\": rpc error: code = NotFound desc = could not find container \"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a\": container with ID starting with 0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a not found: ID does not exist" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.367299 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8bbe037-b253-4db3-b0f5-d02a51ca300e" (UID: "b8bbe037-b253-4db3-b0f5-d02a51ca300e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.426660 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.596762 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.597889 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.951262 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" path="/var/lib/kubelet/pods/b8bbe037-b253-4db3-b0f5-d02a51ca300e/volumes" Jan 24 06:56:53 crc kubenswrapper[4675]: I0124 06:56:53.317881 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:56:53 crc kubenswrapper[4675]: I0124 
06:56:53.318241 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ljvrz" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" containerID="cri-o://58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" gracePeriod=2 Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.163974 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.249053 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"bb9cd470-4963-4979-b7f6-50a2969febf8\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.249126 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"bb9cd470-4963-4979-b7f6-50a2969febf8\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.249174 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"bb9cd470-4963-4979-b7f6-50a2969febf8\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.250118 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities" (OuterVolumeSpecName: "utilities") pod "bb9cd470-4963-4979-b7f6-50a2969febf8" (UID: "bb9cd470-4963-4979-b7f6-50a2969febf8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.258369 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9" (OuterVolumeSpecName: "kube-api-access-7j2j9") pod "bb9cd470-4963-4979-b7f6-50a2969febf8" (UID: "bb9cd470-4963-4979-b7f6-50a2969febf8"). InnerVolumeSpecName "kube-api-access-7j2j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286475 4675 generic.go:334] "Generic (PLEG): container finished" podID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" exitCode=0 Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc"} Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286539 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971"} Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286571 4675 scope.go:117] "RemoveContainer" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.302150 4675 scope.go:117] "RemoveContainer" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.317446 4675 scope.go:117] "RemoveContainer" containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.335613 4675 scope.go:117] "RemoveContainer" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" Jan 24 06:56:54 crc kubenswrapper[4675]: E0124 06:56:54.335930 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc\": container with ID starting with 58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc not found: ID does not exist" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.335953 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc"} err="failed to get container status \"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc\": rpc error: code = NotFound desc = could not find container 
\"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc\": container with ID starting with 58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc not found: ID does not exist" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.335973 4675 scope.go:117] "RemoveContainer" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" Jan 24 06:56:54 crc kubenswrapper[4675]: E0124 06:56:54.336138 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f\": container with ID starting with 117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f not found: ID does not exist" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.336154 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f"} err="failed to get container status \"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f\": rpc error: code = NotFound desc = could not find container \"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f\": container with ID starting with 117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f not found: ID does not exist" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.336166 4675 scope.go:117] "RemoveContainer" containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" Jan 24 06:56:54 crc kubenswrapper[4675]: E0124 06:56:54.336315 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0\": container with ID starting with 28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0 not found: ID does not exist" 
containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.336329 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0"} err="failed to get container status \"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0\": rpc error: code = NotFound desc = could not find container \"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0\": container with ID starting with 28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0 not found: ID does not exist" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.350002 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.350024 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.369207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb9cd470-4963-4979-b7f6-50a2969febf8" (UID: "bb9cd470-4963-4979-b7f6-50a2969febf8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.451528 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.619414 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.625849 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.951135 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" path="/var/lib/kubelet/pods/bb9cd470-4963-4979-b7f6-50a2969febf8/volumes" Jan 24 06:56:56 crc kubenswrapper[4675]: I0124 06:56:56.173903 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.341550 4675 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342651 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342688 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342783 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342637 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342757 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344301 4675 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344677 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344735 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344750 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344762 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344810 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344823 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344842 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344852 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344892 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344904 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344921 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344932 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344971 4675 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344982 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344998 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345009 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345021 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345057 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345070 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345081 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345095 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345144 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345155 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345166 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345177 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345192 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345202 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345238 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345249 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345263 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345273 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345287 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345338 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345349 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345362 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345373 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345609 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345629 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345663 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345682 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 06:57:12 crc kubenswrapper[4675]: 
I0124 06:57:12.345700 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345714 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345759 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345775 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345789 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.346146 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.347934 4675 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.348650 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.354172 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.392624 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424204 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424228 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424278 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424322 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424428 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.525836 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526325 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526373 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526574 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526639 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526645 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526843 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.693885 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: W0124 06:57:12.733088 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7 WatchSource:0}: Error finding container 294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7: Status 404 returned error can't find the container with id 294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7 Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.739991 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.68:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d986f5f7bfb49 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,LastTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.409819 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.411540 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412570 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412602 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412609 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412616 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" exitCode=2 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412660 4675 scope.go:117] "RemoveContainer" containerID="e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.416039 4675 generic.go:334] "Generic (PLEG): container finished" podID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" 
containerID="83da16f3181e756a8f5edd35c942b704ad650f66f9209eff0332ba37056e3ddb" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.416099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerDied","Data":"83da16f3181e756a8f5edd35c942b704ad650f66f9209eff0332ba37056e3ddb"} Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.417125 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.417628 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.418169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"} Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.418200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7"} Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.418772 4675 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.419143 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.968468 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.969345 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.969694 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.970170 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.970417 4675 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.970442 4675 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.970781 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="200ms" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.987078 4675 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.68:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" volumeName="registry-storage" Jan 24 06:57:14 crc kubenswrapper[4675]: E0124 06:57:14.172273 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="400ms" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.486671 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:14 crc kubenswrapper[4675]: E0124 06:57:14.572910 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="800ms" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.782441 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.783668 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.784082 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857671 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857862 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock" (OuterVolumeSpecName: "var-lock") pod "9d4aab5c-f99b-43e8-84b3-6ced30ef8023" (UID: "9d4aab5c-f99b-43e8-84b3-6ced30ef8023"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.858060 4675 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.858789 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9d4aab5c-f99b-43e8-84b3-6ced30ef8023" (UID: "9d4aab5c-f99b-43e8-84b3-6ced30ef8023"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.865564 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9d4aab5c-f99b-43e8-84b3-6ced30ef8023" (UID: "9d4aab5c-f99b-43e8-84b3-6ced30ef8023"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.958813 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.958849 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.255327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.256581 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.257184 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.257545 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.257931 4675 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364327 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364530 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364590 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364651 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.365308 4675 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.365345 4675 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.365362 4675 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.374944 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="1.6s" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.503366 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.504129 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" exitCode=0 Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.504198 4675 scope.go:117] "RemoveContainer" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.504201 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.508922 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.509006 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerDied","Data":"81112f58abc96693bfdadaf20511335609d3187604257dd97b45e9ebeca9ec56"} Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.509050 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81112f58abc96693bfdadaf20511335609d3187604257dd97b45e9ebeca9ec56" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.512751 4675 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.513064 4675 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.513253 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.521029 4675 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.522909 4675 scope.go:117] "RemoveContainer" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.523094 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.523362 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.540460 4675 scope.go:117] "RemoveContainer" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.556446 4675 scope.go:117] "RemoveContainer" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.573701 4675 scope.go:117] "RemoveContainer" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.595891 4675 scope.go:117] "RemoveContainer" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.614912 4675 scope.go:117] "RemoveContainer" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.615537 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\": container with ID starting with 3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c not found: ID does not exist" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.615604 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c"} err="failed to get container status \"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\": rpc error: code = NotFound desc = could not find container \"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\": container with ID starting with 3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c not found: ID does not 
exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.615633 4675 scope.go:117] "RemoveContainer" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.616147 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\": container with ID starting with f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f not found: ID does not exist" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616185 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f"} err="failed to get container status \"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\": rpc error: code = NotFound desc = could not find container \"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\": container with ID starting with f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616218 4675 scope.go:117] "RemoveContainer" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.616560 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\": container with ID starting with 2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc not found: ID does not exist" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616596 4675 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc"} err="failed to get container status \"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\": rpc error: code = NotFound desc = could not find container \"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\": container with ID starting with 2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616621 4675 scope.go:117] "RemoveContainer" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.617096 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\": container with ID starting with ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76 not found: ID does not exist" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617124 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76"} err="failed to get container status \"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\": rpc error: code = NotFound desc = could not find container \"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\": container with ID starting with ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76 not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617141 4675 scope.go:117] "RemoveContainer" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.617500 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\": container with ID starting with 84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63 not found: ID does not exist" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617537 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63"} err="failed to get container status \"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\": rpc error: code = NotFound desc = could not find container \"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\": container with ID starting with 84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63 not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617553 4675 scope.go:117] "RemoveContainer" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.618853 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\": container with ID starting with 7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993 not found: ID does not exist" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.618888 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993"} err="failed to get container status \"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\": rpc error: code = NotFound desc = could 
not find container \"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\": container with ID starting with 7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993 not found: ID does not exist" Jan 24 06:57:16 crc kubenswrapper[4675]: I0124 06:57:16.949700 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 24 06:57:16 crc kubenswrapper[4675]: E0124 06:57:16.979077 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="3.2s" Jan 24 06:57:18 crc kubenswrapper[4675]: I0124 06:57:18.946738 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:18 crc kubenswrapper[4675]: I0124 06:57:18.947362 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:20 crc kubenswrapper[4675]: E0124 06:57:20.179877 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="6.4s" Jan 24 06:57:20 crc kubenswrapper[4675]: E0124 06:57:20.630274 4675 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.68:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d986f5f7bfb49 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,LastTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.213119 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift" containerID="cri-o://983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9" gracePeriod=15 Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.554068 4675 generic.go:334] "Generic (PLEG): container finished" podID="00c16501-712c-4b60-a231-2a64e34ba677" containerID="983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9" exitCode=0 Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.554442 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" 
event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerDied","Data":"983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9"} Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.621044 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.621882 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.622384 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.622607 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.743968 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 
crc kubenswrapper[4675]: I0124 06:57:21.744016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744052 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744103 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744121 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744157 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 
06:57:21.744185 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744207 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744222 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744236 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744255 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744271 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744295 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744320 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744540 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.745645 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.745678 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.745768 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.750901 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751285 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751764 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751966 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc" (OuterVolumeSpecName: "kube-api-access-z9pvc") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "kube-api-access-z9pvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.752880 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.754308 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.756935 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.757441 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.757661 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846180 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846250 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846266 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846284 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846300 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846315 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846329 4675 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846343 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846358 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846372 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846386 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846398 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846412 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 
06:57:21.846423 4675 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.560962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerDied","Data":"1baee155ca04c86836e94a8a309af90387ef167a0b3873a1f4bc0c4361aabb7d"} Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561026 4675 scope.go:117] "RemoveContainer" containerID="983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561034 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561729 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561965 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.562196 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.576835 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.577341 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.577577 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.941976 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.943269 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.944100 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.946674 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.968852 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.968927 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:22 crc kubenswrapper[4675]: E0124 06:57:22.969612 4675 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.970288 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:23 crc kubenswrapper[4675]: W0124 06:57:23.000885 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713 WatchSource:0}: Error finding container 503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713: Status 404 returned error can't find the container with id 503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713 Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.570312 4675 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="007ada23f704ba5fbd4c2f68fa01a4bb3d3db9b89f7f0d81285e468c43163e13" exitCode=0 Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.570743 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"007ada23f704ba5fbd4c2f68fa01a4bb3d3db9b89f7f0d81285e468c43163e13"} Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.570947 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713"} Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.571252 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.571267 4675 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:23 crc kubenswrapper[4675]: E0124 06:57:23.572063 4675 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.572499 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.573943 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.574706 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:24 crc kubenswrapper[4675]: I0124 06:57:24.603242 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0af2a8ccaa93ccaa1c5fb434d0a748434a5f9f1c465d715413ff3b04c0987d59"} Jan 24 06:57:24 crc 
kubenswrapper[4675]: I0124 06:57:24.603293 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"68bc94f5de5bed6164773f05d00d486d1b89c2d89bd74bda82a43ee4512c8703"} Jan 24 06:57:24 crc kubenswrapper[4675]: I0124 06:57:24.603312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef178a185e52148e5c49caeab757ec5a1efa43677998a4493f6305c1fae100ba"} Jan 24 06:57:24 crc kubenswrapper[4675]: I0124 06:57:24.603326 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d1a6191fa655b8a433ded86c6d81e85a609a9cf0f552ebbb9578065043d24893"} Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.610821 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a461428d95a1131ff86c18a1dbbdcea583aa8b75e845c4d8415c5303c28b6a0e"} Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.611347 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.611359 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.611575 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.623536 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.623598 4675 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a" exitCode=1 Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.623626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a"} Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.624072 4675 scope.go:117] "RemoveContainer" containerID="f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.638823 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.638937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297"} Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.970775 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.970858 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.979856 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.620230 4675 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.655580 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.655614 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.659882 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.726241 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="60a60daf-8592-40a0-bd7d-188a46a49628" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.443223 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.660358 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.660390 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.663556 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="60a60daf-8592-40a0-bd7d-188a46a49628" Jan 24 06:57:32 crc kubenswrapper[4675]: I0124 06:57:32.842023 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:57:32 crc kubenswrapper[4675]: I0124 06:57:32.842295 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 06:57:32 crc kubenswrapper[4675]: I0124 06:57:32.842362 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 06:57:36 crc kubenswrapper[4675]: I0124 06:57:36.868838 4675 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 06:57:37 crc kubenswrapper[4675]: I0124 06:57:37.075378 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 06:57:37 crc kubenswrapper[4675]: I0124 06:57:37.380774 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 06:57:37 crc kubenswrapper[4675]: I0124 06:57:37.401064 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.219520 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.264936 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.327220 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.344840 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.390782 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.409233 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.453496 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.519947 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.607520 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.642490 4675 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.643450 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=26.643433783 podStartE2EDuration="26.643433783s" podCreationTimestamp="2026-01-24 06:57:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:57:30.692585236 +0000 UTC m=+251.988690479" watchObservedRunningTime="2026-01-24 06:57:38.643433783 +0000 UTC m=+259.939539016" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648164 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9","openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648227 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648631 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648662 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.654149 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.666076 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.698193 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=8.69817049 podStartE2EDuration="8.69817049s" podCreationTimestamp="2026-01-24 06:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:57:38.6725309 +0000 UTC m=+259.968636153" 
watchObservedRunningTime="2026-01-24 06:57:38.69817049 +0000 UTC m=+259.994275753" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.886230 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.892413 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.949245 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c16501-712c-4b60-a231-2a64e34ba677" path="/var/lib/kubelet/pods/00c16501-712c-4b60-a231-2a64e34ba677/volumes" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.461250 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.700748 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.789433 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.816653 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.056229 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.146376 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.226613 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 06:57:40 
crc kubenswrapper[4675]: I0124 06:57:40.328237 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.341798 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.494660 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.574429 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.156815 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.209972 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.491151 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.494515 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.914537 4675 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.915063 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" 
gracePeriod=5 Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.981623 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.051668 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.176774 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.232028 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.262183 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.343656 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.434394 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.526570 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.560133 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.637057 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.717518 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.806598 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.842778 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.842868 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.873114 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.918293 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.055343 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.063962 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.078484 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 06:57:43 crc 
kubenswrapper[4675]: I0124 06:57:43.164507 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.256015 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.310505 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.399784 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.594656 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.126910 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.189964 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.260417 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.367790 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.421017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.492656 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.667946 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.780174 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.093017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.176497 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.211104 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.260151 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.356929 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.388108 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.411240 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.419418 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 06:57:45 crc kubenswrapper[4675]: 
I0124 06:57:45.728650 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.088082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.114504 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.215675 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.302101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.380357 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.384617 4675 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.447442 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.691349 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.829440 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.993904 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 
06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.034249 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.134465 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.152090 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.204062 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.273554 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.323823 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.417584 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.456846 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.480018 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.480088 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.515160 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.616869 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.616929 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617007 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617045 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617077 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 
06:57:47.617138 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617205 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617205 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617292 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617445 4675 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617457 4675 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617468 4675 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617476 4675 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.630575 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.651785 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.718456 4675 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.767964 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.768018 4675 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" exitCode=137
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.768066 4675 scope.go:117] "RemoveContainer" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.768150 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.787264 4675 scope.go:117] "RemoveContainer" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"
Jan 24 06:57:47 crc kubenswrapper[4675]: E0124 06:57:47.787735 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60\": container with ID starting with f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60 not found: ID does not exist" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.787782 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"} err="failed to get container status \"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60\": rpc error: code = NotFound desc = could not find container \"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60\": container with ID starting with f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60 not found: ID does not exist"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.810358 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.818244 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.962212 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.971683 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.152044 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.180523 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.261176 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.274835 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.298891 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.343598 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.440533 4675 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.492826 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.649122 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.705266 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.721381 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.856559 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.920936 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.948038 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.950177 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.950566 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.961359 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.961396 4675 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8c295437-e946-4237-8285-f1bbed0e47e1"
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.965666 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.965689 4675 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8c295437-e946-4237-8285-f1bbed0e47e1"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.087851 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.111140 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.142748 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.247356 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.331217 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.409658 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.461428 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.753690 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.796948 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.866582 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.056542 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.084147 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.105221 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.180206 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.227945 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.229831 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.261663 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.266371 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.432139 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.496312 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.689303 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.716384 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.887017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.900381 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.953075 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.994394 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.081525 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.317759 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.393910 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.403659 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.560618 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.702826 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.759279 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.786680 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.790513 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.822501 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.824544 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.905322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.928692 4675 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.961520 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.011013 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.037796 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.061198 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.146196 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.223806 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.347196 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.361404 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.402853 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.514006 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.588532 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.640922 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.694810 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.799565 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.842797 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.842871 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.842959 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.844009 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.844269 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297" gracePeriod=30
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.856522 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.858273 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"]
Jan 24 06:57:52 crc kubenswrapper[4675]: E0124 06:57:52.858677 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.858805 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 24 06:57:52 crc kubenswrapper[4675]: E0124 06:57:52.858906 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" containerName="installer"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.858997 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" containerName="installer"
Jan 24 06:57:52 crc kubenswrapper[4675]: E0124 06:57:52.859078 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859148 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859340 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859446 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859531 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" containerName="installer"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.860052 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.863270 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.863528 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.863774 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.865442 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.869701 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871203 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871480 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871680 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871930 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.874176 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.875505 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.875652 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.877026 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"]
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.879658 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.883908 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.891796 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974510 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974594 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974688 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974793 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-policies\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974825 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rg42\" (UniqueName: \"kubernetes.io/projected/819880a6-f27d-4aab-9e8d-16326b87fcfc-kube-api-access-6rg42\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-session\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974911 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-login\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974962 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-dir\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974999 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.975039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.975069 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-error\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.997571 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.015967 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-error\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076860 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076906 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076959 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076982 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077013 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077043 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-policies\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077064 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rg42\" (UniqueName: \"kubernetes.io/projected/819880a6-f27d-4aab-9e8d-16326b87fcfc-kube-api-access-6rg42\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-session\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077123 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-login\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077151 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-dir\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.078516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-dir\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.079444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-policies\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.079970 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.080253 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.080705 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.085118 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.085884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-error\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.086986 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.088877 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.090078 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.090516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-login\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.091772 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.092354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-session\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " 
pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.102817 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rg42\" (UniqueName: \"kubernetes.io/projected/819880a6-f27d-4aab-9e8d-16326b87fcfc-kube-api-access-6rg42\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.103564 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.192233 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.208357 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.276703 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.322855 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.323140 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.370138 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.400180 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 06:57:53 crc 
kubenswrapper[4675]: I0124 06:57:53.415404 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.433104 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.480909 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.521816 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.525948 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.533838 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.596811 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"] Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.667013 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.799524 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.805918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" event={"ID":"819880a6-f27d-4aab-9e8d-16326b87fcfc","Type":"ContainerStarted","Data":"13465081a3a7b803b246ab1f6dd1b4186091f02257ac52b0ca329eaba9924e0a"} Jan 24 06:57:53 
crc kubenswrapper[4675]: I0124 06:57:53.858829 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.898441 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.958913 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.051917 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.185115 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.300811 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.314800 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.405935 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.494361 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.529467 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.573038 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.614979 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.716584 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.817312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" event={"ID":"819880a6-f27d-4aab-9e8d-16326b87fcfc","Type":"ContainerStarted","Data":"6da88ca578b8f1849119c62d75920fb05e26f58d7434c56ef53cff8e9457d522"} Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.817669 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.829465 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.831539 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.835584 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.841352 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.857459 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" podStartSLOduration=58.857439062 podStartE2EDuration="58.857439062s" 
podCreationTimestamp="2026-01-24 06:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:57:54.851977722 +0000 UTC m=+276.148083015" watchObservedRunningTime="2026-01-24 06:57:54.857439062 +0000 UTC m=+276.153544295" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.894351 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.913046 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.951012 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.038794 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.186435 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.386558 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.513780 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.569769 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.601208 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 
06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.711647 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.736568 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.766821 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.782865 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.987610 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.015753 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.040886 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.081595 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.142963 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.521889 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.534348 4675 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.680903 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.981224 4675 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.019627 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.090539 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.270510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.295891 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.338755 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.470580 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.560229 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.673313 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 
06:57:57.880343 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.165810 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.468959 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.508688 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.524853 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.034337 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.106034 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.304305 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.394885 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.490300 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 06:58:00 crc kubenswrapper[4675]: I0124 06:58:00.222074 4675 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 24 06:58:18 crc kubenswrapper[4675]: I0124 06:58:18.820414 4675 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.968352 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.974931 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.974979 4675 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297" exitCode=137 Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.975007 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297"} Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.975046 4675 scope.go:117] "RemoveContainer" containerID="f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a" Jan 24 06:58:23 crc kubenswrapper[4675]: I0124 06:58:23.987639 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 24 06:58:23 crc kubenswrapper[4675]: I0124 06:58:23.989737 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f62c520aebd80fc3733322341b021355b2ab863b0291a2704d4c4ab9c661bf31"} Jan 24 06:58:31 crc kubenswrapper[4675]: I0124 06:58:31.442373 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:58:32 crc kubenswrapper[4675]: I0124 06:58:32.842255 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:58:32 crc kubenswrapper[4675]: I0124 06:58:32.847023 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:58:33 crc kubenswrapper[4675]: I0124 06:58:33.038161 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.267334 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.269239 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" containerID="cri-o://7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" gracePeriod=30 Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.272569 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.272921 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" 
podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" containerID="cri-o://d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" gracePeriod=30 Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.624311 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.666118 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796536 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796577 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod 
\"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796772 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796830 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796853 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798042 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config" (OuterVolumeSpecName: "config") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798183 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798253 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config" (OuterVolumeSpecName: "config") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798357 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802079 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc" (OuterVolumeSpecName: "kube-api-access-xvzrc") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "kube-api-access-xvzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802185 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l" (OuterVolumeSpecName: "kube-api-access-xdk9l") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "kube-api-access-xdk9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802701 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898697 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898802 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898830 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898860 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898886 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898909 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898962 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898991 4675 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.899016 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085793 4675 generic.go:334] "Generic (PLEG): container finished" podID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" exitCode=0 Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085866 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerDied","Data":"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"} Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085895 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerDied","Data":"d68bea8b00c026526be03f959939477c57040be0a40f40783ac0e65d642a96db"} Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085913 4675 scope.go:117] "RemoveContainer" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.086009 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094739 4675 generic.go:334] "Generic (PLEG): container finished" podID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" exitCode=0 Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094786 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerDied","Data":"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6"} Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094815 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerDied","Data":"2617a8d5990bca5860fe83af255dca72d1f078c4ac17075407e8e2d08aa3e5d0"} Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094867 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.113755 4675 scope.go:117] "RemoveContainer" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" Jan 24 06:58:44 crc kubenswrapper[4675]: E0124 06:58:44.114088 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9\": container with ID starting with d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9 not found: ID does not exist" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.114119 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"} err="failed to get container status \"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9\": rpc error: code = NotFound desc = could not find container \"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9\": container with ID starting with d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9 not found: ID does not exist" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.114139 4675 scope.go:117] "RemoveContainer" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.124572 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.128641 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.133282 4675 scope.go:117] 
"RemoveContainer" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" Jan 24 06:58:44 crc kubenswrapper[4675]: E0124 06:58:44.133707 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6\": container with ID starting with 7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6 not found: ID does not exist" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.133781 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6"} err="failed to get container status \"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6\": rpc error: code = NotFound desc = could not find container \"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6\": container with ID starting with 7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6 not found: ID does not exist" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.134855 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.138785 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.950074 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" path="/var/lib/kubelet/pods/5cea3fd8-8eb5-46e1-9991-ec1096d357e5/volumes" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.951288 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" 
path="/var/lib/kubelet/pods/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d/volumes" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290424 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:58:45 crc kubenswrapper[4675]: E0124 06:58:45.290734 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290747 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: E0124 06:58:45.290763 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290771 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290874 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290895 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.291371 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.293810 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.293833 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.293873 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.294623 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.299247 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.299576 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.299920 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300171 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300262 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300317 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300637 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.301082 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.301108 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.302064 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.302298 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.311694 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.335280 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415081 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b80055b-992d-4252-9e05-54057a97a274-serving-cert\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415128 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-config\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkrn\" (UniqueName: \"kubernetes.io/projected/4b80055b-992d-4252-9e05-54057a97a274-kube-api-access-5lkrn\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415201 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415270 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-client-ca\") pod 
\"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415331 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415437 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415459 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b80055b-992d-4252-9e05-54057a97a274-serving-cert\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517069 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-config\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517091 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkrn\" (UniqueName: \"kubernetes.io/projected/4b80055b-992d-4252-9e05-54057a97a274-kube-api-access-5lkrn\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517125 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517142 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-client-ca\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " 
pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517163 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.518806 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.518868 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-client-ca\") pod 
\"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.518996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-config\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.519199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.519277 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.522213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b80055b-992d-4252-9e05-54057a97a274-serving-cert\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.522212 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.534544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkrn\" (UniqueName: \"kubernetes.io/projected/4b80055b-992d-4252-9e05-54057a97a274-kube-api-access-5lkrn\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.542884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.614579 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.625104 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.844462 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.892262 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf"] Jan 24 06:58:45 crc kubenswrapper[4675]: W0124 06:58:45.900128 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b80055b_992d_4252_9e05_54057a97a274.slice/crio-94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442 WatchSource:0}: Error finding container 94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442: Status 404 returned error can't find the container with id 94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442 Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.118742 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerStarted","Data":"26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.119081 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.119094 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerStarted","Data":"96e565553cb619087fdc5ef912bf40674e76c71df4883bea6af8efc4c2e49f0f"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.121398 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" event={"ID":"4b80055b-992d-4252-9e05-54057a97a274","Type":"ContainerStarted","Data":"71f59b8b0da8f254e0f3868e35321280d2af023900cdefe9338ec0c961ca6d40"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.121424 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" event={"ID":"4b80055b-992d-4252-9e05-54057a97a274","Type":"ContainerStarted","Data":"94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.121639 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.124790 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.136630 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" podStartSLOduration=3.136612671 podStartE2EDuration="3.136612671s" podCreationTimestamp="2026-01-24 06:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:58:46.134661463 +0000 UTC m=+327.430766706" watchObservedRunningTime="2026-01-24 06:58:46.136612671 +0000 UTC m=+327.432717904" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.188644 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" podStartSLOduration=3.188491475 podStartE2EDuration="3.188491475s" podCreationTimestamp="2026-01-24 06:58:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:58:46.186023526 +0000 UTC m=+327.482128759" watchObservedRunningTime="2026-01-24 06:58:46.188491475 +0000 UTC m=+327.484596698" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.495579 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:59:00 crc kubenswrapper[4675]: I0124 06:59:00.308506 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:59:00 crc kubenswrapper[4675]: I0124 06:59:00.309301 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" containerID="cri-o://26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6" gracePeriod=30 Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.198627 4675 generic.go:334] "Generic (PLEG): container finished" podID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerID="26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6" exitCode=0 Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.198951 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerDied","Data":"26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6"} Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.352626 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.375989 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f869b48f5-5s42m"] Jan 24 06:59:01 crc kubenswrapper[4675]: E0124 06:59:01.376175 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.376186 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.376279 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.376635 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.424503 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f869b48f5-5s42m"] Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519462 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519531 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519572 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519603 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519645 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: 
\"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519826 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96l6\" (UniqueName: \"kubernetes.io/projected/3d81b726-5276-4f17-aee8-ef3ec176c910-kube-api-access-j96l6\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-proxy-ca-bundles\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519893 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-config\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519909 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d81b726-5276-4f17-aee8-ef3ec176c910-serving-cert\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-client-ca\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.520488 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config" (OuterVolumeSpecName: "config") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.520802 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.521412 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.524743 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt" (OuterVolumeSpecName: "kube-api-access-sbqwt") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "kube-api-access-sbqwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.530534 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621212 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-client-ca\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621330 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j96l6\" (UniqueName: \"kubernetes.io/projected/3d81b726-5276-4f17-aee8-ef3ec176c910-kube-api-access-j96l6\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-proxy-ca-bundles\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-config\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d81b726-5276-4f17-aee8-ef3ec176c910-serving-cert\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621474 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621489 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621502 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621516 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621528 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc 
kubenswrapper[4675]: I0124 06:59:01.622907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-client-ca\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.623121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-config\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.624746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-proxy-ca-bundles\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.625555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d81b726-5276-4f17-aee8-ef3ec176c910-serving-cert\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.636663 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j96l6\" (UniqueName: \"kubernetes.io/projected/3d81b726-5276-4f17-aee8-ef3ec176c910-kube-api-access-j96l6\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " 
pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.735837 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.157116 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f869b48f5-5s42m"] Jan 24 06:59:02 crc kubenswrapper[4675]: W0124 06:59:02.162310 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d81b726_5276_4f17_aee8_ef3ec176c910.slice/crio-81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda WatchSource:0}: Error finding container 81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda: Status 404 returned error can't find the container with id 81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.204632 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" event={"ID":"3d81b726-5276-4f17-aee8-ef3ec176c910","Type":"ContainerStarted","Data":"81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda"} Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.205868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerDied","Data":"96e565553cb619087fdc5ef912bf40674e76c71df4883bea6af8efc4c2e49f0f"} Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.205903 4675 scope.go:117] "RemoveContainer" containerID="26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6" Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.206020 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.240943 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.245016 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.948182 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" path="/var/lib/kubelet/pods/9a2ff8b9-89c4-4f23-863e-45f020ace61d/volumes" Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.212906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" event={"ID":"3d81b726-5276-4f17-aee8-ef3ec176c910","Type":"ContainerStarted","Data":"a81f66b2038bb77432f712cea1f7945a7f39d0accacf48f957a8329c1ac75db6"} Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.214213 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.219658 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.237201 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" podStartSLOduration=3.237179841 podStartE2EDuration="3.237179841s" podCreationTimestamp="2026-01-24 06:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:59:03.230659714 +0000 UTC m=+344.526764937" 
watchObservedRunningTime="2026-01-24 06:59:03.237179841 +0000 UTC m=+344.533285064" Jan 24 06:59:08 crc kubenswrapper[4675]: I0124 06:59:08.629996 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:59:08 crc kubenswrapper[4675]: I0124 06:59:08.630434 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.976336 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.977233 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" containerID="cri-o://f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b" gracePeriod=30 Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.988338 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.990118 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmxj8" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server" containerID="cri-o://ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.003026 4675 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.003516 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" containerID="cri-o://eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.016524 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.016897 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f482d" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server" containerID="cri-o://c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.027760 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.027986 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6vjtj" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" containerID="cri-o://a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.032315 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cx7r"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.033096 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.050914 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cx7r"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.139106 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.139299 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.150032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8kp\" (UniqueName: \"kubernetes.io/projected/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-kube-api-access-hk8kp\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.150095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.150155 
4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.251044 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.251106 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk8kp\" (UniqueName: \"kubernetes.io/projected/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-kube-api-access-hk8kp\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.251140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.252587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.257006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.271398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk8kp\" (UniqueName: \"kubernetes.io/projected/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-kube-api-access-hk8kp\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.314390 4675 generic.go:334] "Generic (PLEG): container finished" podID="1165063b-e2f9-406a-86c7-0559c419d043" containerID="f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.314447 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.316261 4675 generic.go:334] "Generic (PLEG): container finished" podID="26a336bf-741a-462c-bafd-9ff5e4838956" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.316322 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.318103 4675 generic.go:334] "Generic (PLEG): container finished" podID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerID="ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.318150 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.319790 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerID="eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.319829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerDied","Data":"eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.322070 4675 generic.go:334] "Generic (PLEG): container finished" podID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerID="c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.322192 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 
06:59:18.348212 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.730919 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cx7r"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.975371 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.083206 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"1165063b-e2f9-406a-86c7-0559c419d043\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.099964 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"1165063b-e2f9-406a-86c7-0559c419d043\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.100008 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"1165063b-e2f9-406a-86c7-0559c419d043\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.106201 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities" (OuterVolumeSpecName: "utilities") pod "1165063b-e2f9-406a-86c7-0559c419d043" (UID: "1165063b-e2f9-406a-86c7-0559c419d043"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.114368 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp" (OuterVolumeSpecName: "kube-api-access-rm4jp") pod "1165063b-e2f9-406a-86c7-0559c419d043" (UID: "1165063b-e2f9-406a-86c7-0559c419d043"). InnerVolumeSpecName "kube-api-access-rm4jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.159087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1165063b-e2f9-406a-86c7-0559c419d043" (UID: "1165063b-e2f9-406a-86c7-0559c419d043"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.204574 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.204612 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.204628 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.265040 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.305690 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.305756 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.305878 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.308922 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities" (OuterVolumeSpecName: "utilities") pod "0eebacf7-e6c0-4fad-a868-ed067f1b1acc" (UID: "0eebacf7-e6c0-4fad-a868-ed067f1b1acc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.309003 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9" (OuterVolumeSpecName: "kube-api-access-s4zq9") pod "0eebacf7-e6c0-4fad-a868-ed067f1b1acc" (UID: "0eebacf7-e6c0-4fad-a868-ed067f1b1acc"). InnerVolumeSpecName "kube-api-access-s4zq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.316864 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" cmd=["grpc_health_probe","-addr=:50051"]
Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.317801 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" cmd=["grpc_health_probe","-addr=:50051"]
Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.318025 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" cmd=["grpc_health_probe","-addr=:50051"]
Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.318053 4675 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-6vjtj" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.328876 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.338804 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"ee8ef93d6dbda9d79ddf1313f70a0d90a2db3cc78f034c04f57e14a671da3bf7"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.338883 4675 scope.go:117] "RemoveContainer" containerID="ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.347036 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eebacf7-e6c0-4fad-a868-ed067f1b1acc" (UID: "0eebacf7-e6c0-4fad-a868-ed067f1b1acc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.352099 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.360359 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"236fdbdb18d6c63d9e15929a4a294f390be7b67da4280698a6623d3338464d82"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.360512 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.363113 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.367678 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.367826 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.379997 4675 scope.go:117] "RemoveContainer" containerID="b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.386668 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"207126b350e6a988e2c0611799f1606a299f405d91cfae55c96cd51fac72006a"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.387038 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.390584 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.390790 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerDied","Data":"4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.392153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" event={"ID":"83c80cb7-74c3-417a-8d8e-54cdcf640b5b","Type":"ContainerStarted","Data":"7315a16df384d876eeaa622c2c21e57a267d18e8770a0de409178f9ba53b2f51"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.392183 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" event={"ID":"83c80cb7-74c3-417a-8d8e-54cdcf640b5b","Type":"ContainerStarted","Data":"37afc4e8844756788f5eacf1da01b54d629e0fb7323e672007240e8259f6fb25"}
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.393195 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.401225 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9cx7r container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body=
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.401284 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" podUID="83c80cb7-74c3-417a-8d8e-54cdcf640b5b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407500 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407555 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"26a336bf-741a-462c-bafd-9ff5e4838956\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407587 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407617 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"58002d63-9bc7-4470-a2ae-9be6e2828136\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"26a336bf-741a-462c-bafd-9ff5e4838956\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407697 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"58002d63-9bc7-4470-a2ae-9be6e2828136\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407752 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"26a336bf-741a-462c-bafd-9ff5e4838956\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407798 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407851 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"58002d63-9bc7-4470-a2ae-9be6e2828136\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") "
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.408087 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.408107 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.408120 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.410982 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities" (OuterVolumeSpecName: "utilities") pod "26a336bf-741a-462c-bafd-9ff5e4838956" (UID: "26a336bf-741a-462c-bafd-9ff5e4838956"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.411082 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b1a4e6f5-492a-4b32-aa94-c8eca20b0067" (UID: "b1a4e6f5-492a-4b32-aa94-c8eca20b0067"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.412857 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities" (OuterVolumeSpecName: "utilities") pod "58002d63-9bc7-4470-a2ae-9be6e2828136" (UID: "58002d63-9bc7-4470-a2ae-9be6e2828136"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.413704 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2" (OuterVolumeSpecName: "kube-api-access-szrw2") pod "58002d63-9bc7-4470-a2ae-9be6e2828136" (UID: "58002d63-9bc7-4470-a2ae-9be6e2828136"). InnerVolumeSpecName "kube-api-access-szrw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.415968 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj" (OuterVolumeSpecName: "kube-api-access-wfwmj") pod "26a336bf-741a-462c-bafd-9ff5e4838956" (UID: "26a336bf-741a-462c-bafd-9ff5e4838956"). InnerVolumeSpecName "kube-api-access-wfwmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.417295 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b1a4e6f5-492a-4b32-aa94-c8eca20b0067" (UID: "b1a4e6f5-492a-4b32-aa94-c8eca20b0067"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.418907 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p" (OuterVolumeSpecName: "kube-api-access-qw26p") pod "b1a4e6f5-492a-4b32-aa94-c8eca20b0067" (UID: "b1a4e6f5-492a-4b32-aa94-c8eca20b0067"). InnerVolumeSpecName "kube-api-access-qw26p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.418318 4675 scope.go:117] "RemoveContainer" containerID="27e8eea471043c3df40a37d217289a9d5547edf9d7cc2d893fdce5d2d206a098"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.456013 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.463232 4675 scope.go:117] "RemoveContainer" containerID="c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.468824 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.471144 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" podStartSLOduration=1.471128373 podStartE2EDuration="1.471128373s" podCreationTimestamp="2026-01-24 06:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:59:19.439255551 +0000 UTC m=+360.735360774" watchObservedRunningTime="2026-01-24 06:59:19.471128373 +0000 UTC m=+360.767233596"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.478117 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.483516 4675 scope.go:117] "RemoveContainer" containerID="acc5dc0c07c3a0b5401b6f9bc7ce29ec56cf35e994494b4063328ab3e6990f50"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.484262 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.492108 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58002d63-9bc7-4470-a2ae-9be6e2828136" (UID: "58002d63-9bc7-4470-a2ae-9be6e2828136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.500712 4675 scope.go:117] "RemoveContainer" containerID="eb71624ab1714e3b868179ef8715f76bbb985e9ee1eb32ef5ea46430a5377ae3"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508893 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508917 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508926 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508935 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508943 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508951 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508959 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508969 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.519683 4675 scope.go:117] "RemoveContainer" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.534599 4675 scope.go:117] "RemoveContainer" containerID="1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.557027 4675 scope.go:117] "RemoveContainer" containerID="950ce170714980389fc4fdc60fb6c50ac2d025bc7af1f23de6767552eb91501f"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.573628 4675 scope.go:117] "RemoveContainer" containerID="f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.574607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26a336bf-741a-462c-bafd-9ff5e4838956" (UID: "26a336bf-741a-462c-bafd-9ff5e4838956"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.584754 4675 scope.go:117] "RemoveContainer" containerID="42b8ee55bd339ab55f41df3ff58f52b52b0d7e8bb773f48fda829b8f6ab4ed80"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.601553 4675 scope.go:117] "RemoveContainer" containerID="59eb245fda115973b3f277ca4c5731837caa16e3bd2b40daf6b31eeaebc1bf72"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.610005 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.613888 4675 scope.go:117] "RemoveContainer" containerID="eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b"
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.690916 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.694148 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.714678 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"]
Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.721817 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"]
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191052 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qrkr2"]
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191287 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191302 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191317 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191325 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191336 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191345 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191357 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191365 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191380 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191390 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191407 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191417 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191427 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191434 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191444 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191452 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191463 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191470 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191481 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191490 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-content"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191503 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191510 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191520 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191527 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191542 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191552 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-utilities"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191701 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191964 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191985 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191999 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.192098 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.193124 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.198840 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.203733 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrkr2"]
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.220611 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-utilities\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.220959 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppcx\" (UniqueName: \"kubernetes.io/projected/b4b49920-8f11-4ffb-84f0-930d921f722d-kube-api-access-mppcx\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.221107 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-catalog-content\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.322697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mppcx\" (UniqueName: \"kubernetes.io/projected/b4b49920-8f11-4ffb-84f0-930d921f722d-kube-api-access-mppcx\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.323414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-catalog-content\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.323925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-utilities\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.323881 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-catalog-content\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.324280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-utilities\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.349311 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mppcx\" (UniqueName: \"kubernetes.io/projected/b4b49920-8f11-4ffb-84f0-930d921f722d-kube-api-access-mppcx\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.385597 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zdff"]
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.386510 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.389335 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.396060 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zdff"]
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.401888 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.414529 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.425376 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-catalog-content\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.425429 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-utilities\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.425500 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgjc\" (UniqueName: \"kubernetes.io/projected/96e2d7dc-bba1-4021-a095-98a4feb924da-kube-api-access-8xgjc\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.450741 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"]
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.456792 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"]
Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.517010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526154 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-catalog-content\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526696 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-utilities\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-catalog-content\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgjc\" (UniqueName: \"kubernetes.io/projected/96e2d7dc-bba1-4021-a095-98a4feb924da-kube-api-access-8xgjc\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.527294 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-utilities\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " 
pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.550238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgjc\" (UniqueName: \"kubernetes.io/projected/96e2d7dc-bba1-4021-a095-98a4feb924da-kube-api-access-8xgjc\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.704760 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.917498 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrkr2"] Jan 24 06:59:20 crc kubenswrapper[4675]: W0124 06:59:20.920147 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4b49920_8f11_4ffb_84f0_930d921f722d.slice/crio-50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259 WatchSource:0}: Error finding container 50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259: Status 404 returned error can't find the container with id 50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259 Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.948398 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" path="/var/lib/kubelet/pods/0eebacf7-e6c0-4fad-a868-ed067f1b1acc/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.949180 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1165063b-e2f9-406a-86c7-0559c419d043" path="/var/lib/kubelet/pods/1165063b-e2f9-406a-86c7-0559c419d043/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.949729 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="26a336bf-741a-462c-bafd-9ff5e4838956" path="/var/lib/kubelet/pods/26a336bf-741a-462c-bafd-9ff5e4838956/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.950779 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" path="/var/lib/kubelet/pods/58002d63-9bc7-4470-a2ae-9be6e2828136/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.951336 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" path="/var/lib/kubelet/pods/b1a4e6f5-492a-4b32-aa94-c8eca20b0067/volumes" Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.094134 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zdff"] Jan 24 06:59:21 crc kubenswrapper[4675]: W0124 06:59:21.116024 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e2d7dc_bba1_4021_a095_98a4feb924da.slice/crio-4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c WatchSource:0}: Error finding container 4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c: Status 404 returned error can't find the container with id 4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.415829 4675 generic.go:334] "Generic (PLEG): container finished" podID="b4b49920-8f11-4ffb-84f0-930d921f722d" containerID="ccaddfa5705fe148ac5795014f603d990719078f6b694bc09be4c33394a83f93" exitCode=0 Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.415888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerDied","Data":"ccaddfa5705fe148ac5795014f603d990719078f6b694bc09be4c33394a83f93"} Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.415910 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerStarted","Data":"50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259"} Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.420225 4675 generic.go:334] "Generic (PLEG): container finished" podID="96e2d7dc-bba1-4021-a095-98a4feb924da" containerID="5b167e690acee760ac42a4a289df373cf19e52976144fa4be335c52e57eaa6ad" exitCode=0 Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.420741 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerDied","Data":"5b167e690acee760ac42a4a289df373cf19e52976144fa4be335c52e57eaa6ad"} Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.420782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerStarted","Data":"4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c"} Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.431938 4675 generic.go:334] "Generic (PLEG): container finished" podID="b4b49920-8f11-4ffb-84f0-930d921f722d" containerID="65b02291eb6da3105f095eba4028ddbb473b0c6f2e8e52de9ef85c9f4e5201b5" exitCode=0 Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.432116 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerDied","Data":"65b02291eb6da3105f095eba4028ddbb473b0c6f2e8e52de9ef85c9f4e5201b5"} Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.440684 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" 
event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerStarted","Data":"a985913cd2b7a8f1a29a5a0e2ede47d567f18907cdc04794b06e93451d2719e8"} Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.593540 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsdlx"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.600339 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.602847 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.613749 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsdlx"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.653249 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-utilities\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.653312 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkkj\" (UniqueName: \"kubernetes.io/projected/c74192ba-e384-473f-8b1f-5acf16fcf6cb-kube-api-access-njkkj\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.653352 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-catalog-content\") pod 
\"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755009 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-utilities\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755064 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkkj\" (UniqueName: \"kubernetes.io/projected/c74192ba-e384-473f-8b1f-5acf16fcf6cb-kube-api-access-njkkj\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755095 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-catalog-content\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-utilities\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755999 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-catalog-content\") pod \"certified-operators-bsdlx\" (UID: 
\"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.775522 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkkj\" (UniqueName: \"kubernetes.io/projected/c74192ba-e384-473f-8b1f-5acf16fcf6cb-kube-api-access-njkkj\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.791620 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25b5x"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.796843 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.800203 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.806902 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25b5x"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.856860 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-catalog-content\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.857241 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-utilities\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " 
pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.857267 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/c82ba4e7-d34e-49ce-a0fa-628261617832-kube-api-access-85t57\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.925427 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.958909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-catalog-content\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-utilities\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/c82ba4e7-d34e-49ce-a0fa-628261617832-kube-api-access-85t57\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959482 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-catalog-content\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-utilities\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.977814 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/c82ba4e7-d34e-49ce-a0fa-628261617832-kube-api-access-85t57\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.129546 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.378991 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsdlx"] Jan 24 06:59:23 crc kubenswrapper[4675]: W0124 06:59:23.387151 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74192ba_e384_473f_8b1f_5acf16fcf6cb.slice/crio-7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70 WatchSource:0}: Error finding container 7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70: Status 404 returned error can't find the container with id 7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70 Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.460828 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerStarted","Data":"2263357c7498bfc162aea56406a3919aaa1fcfc6868f72fe6336bc2e318074e9"} Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.468409 4675 generic.go:334] "Generic (PLEG): container finished" podID="96e2d7dc-bba1-4021-a095-98a4feb924da" containerID="a985913cd2b7a8f1a29a5a0e2ede47d567f18907cdc04794b06e93451d2719e8" exitCode=0 Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.468696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerDied","Data":"a985913cd2b7a8f1a29a5a0e2ede47d567f18907cdc04794b06e93451d2719e8"} Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.473176 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" 
event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerStarted","Data":"7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70"} Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.491643 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qrkr2" podStartSLOduration=2.04499028 podStartE2EDuration="3.491625556s" podCreationTimestamp="2026-01-24 06:59:20 +0000 UTC" firstStartedPulling="2026-01-24 06:59:21.417510428 +0000 UTC m=+362.713615651" lastFinishedPulling="2026-01-24 06:59:22.864145704 +0000 UTC m=+364.160250927" observedRunningTime="2026-01-24 06:59:23.491248877 +0000 UTC m=+364.787354100" watchObservedRunningTime="2026-01-24 06:59:23.491625556 +0000 UTC m=+364.787730779" Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.533469 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25b5x"] Jan 24 06:59:23 crc kubenswrapper[4675]: W0124 06:59:23.541219 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82ba4e7_d34e_49ce_a0fa_628261617832.slice/crio-ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b WatchSource:0}: Error finding container ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b: Status 404 returned error can't find the container with id ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.480261 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerStarted","Data":"a82eee3b291fcb3b1be2512271cfa6fc559bb6b1a9afb3bfba33c3cea8d6b40a"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.482842 4675 generic.go:334] "Generic (PLEG): container finished" podID="c82ba4e7-d34e-49ce-a0fa-628261617832" 
containerID="7af8a994af912e2c9f4b56adfaeac2a237c39b125b2dbf596993d850e495bc5a" exitCode=0 Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.482920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerDied","Data":"7af8a994af912e2c9f4b56adfaeac2a237c39b125b2dbf596993d850e495bc5a"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.482955 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerStarted","Data":"ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.484256 4675 generic.go:334] "Generic (PLEG): container finished" podID="c74192ba-e384-473f-8b1f-5acf16fcf6cb" containerID="a3db1b5fb74942c7678ba08dbe353b4ae8805d6a490dee7ba309e6668bcf1ae8" exitCode=0 Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.484352 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerDied","Data":"a3db1b5fb74942c7678ba08dbe353b4ae8805d6a490dee7ba309e6668bcf1ae8"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.528616 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zdff" podStartSLOduration=1.869262425 podStartE2EDuration="4.528598923s" podCreationTimestamp="2026-01-24 06:59:20 +0000 UTC" firstStartedPulling="2026-01-24 06:59:21.421159576 +0000 UTC m=+362.717264789" lastFinishedPulling="2026-01-24 06:59:24.080496064 +0000 UTC m=+365.376601287" observedRunningTime="2026-01-24 06:59:24.506620631 +0000 UTC m=+365.802725854" watchObservedRunningTime="2026-01-24 06:59:24.528598923 +0000 UTC m=+365.824704146" Jan 24 06:59:25 crc kubenswrapper[4675]: I0124 
06:59:25.490706 4675 generic.go:334] "Generic (PLEG): container finished" podID="c82ba4e7-d34e-49ce-a0fa-628261617832" containerID="b04406b1c9b8c3f7c73fece492d531ae08bf545b760278cf923d81fafb1190bd" exitCode=0 Jan 24 06:59:25 crc kubenswrapper[4675]: I0124 06:59:25.490751 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerDied","Data":"b04406b1c9b8c3f7c73fece492d531ae08bf545b760278cf923d81fafb1190bd"} Jan 24 06:59:25 crc kubenswrapper[4675]: I0124 06:59:25.494491 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerStarted","Data":"49bf9b5a0c96a7a5d3c567d0fd076ea25dccae51e74d4e5be91306c8eab73923"} Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.501461 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerStarted","Data":"ecd9551db7e81d40febc0ce99d00250b428edf061db986fa7f7141ef1f9d7ea9"} Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.503047 4675 generic.go:334] "Generic (PLEG): container finished" podID="c74192ba-e384-473f-8b1f-5acf16fcf6cb" containerID="49bf9b5a0c96a7a5d3c567d0fd076ea25dccae51e74d4e5be91306c8eab73923" exitCode=0 Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.503098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerDied","Data":"49bf9b5a0c96a7a5d3c567d0fd076ea25dccae51e74d4e5be91306c8eab73923"} Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.525929 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25b5x" podStartSLOduration=3.110505402 
podStartE2EDuration="4.525908121s" podCreationTimestamp="2026-01-24 06:59:22 +0000 UTC" firstStartedPulling="2026-01-24 06:59:24.484450434 +0000 UTC m=+365.780555657" lastFinishedPulling="2026-01-24 06:59:25.899853153 +0000 UTC m=+367.195958376" observedRunningTime="2026-01-24 06:59:26.518996063 +0000 UTC m=+367.815101286" watchObservedRunningTime="2026-01-24 06:59:26.525908121 +0000 UTC m=+367.822013344" Jan 24 06:59:27 crc kubenswrapper[4675]: I0124 06:59:27.510367 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerStarted","Data":"f41cbc17f47cd7d58a62f3200b3f735b68f52664142d072cf0a64ca1459891b4"} Jan 24 06:59:27 crc kubenswrapper[4675]: I0124 06:59:27.536555 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsdlx" podStartSLOduration=3.107682034 podStartE2EDuration="5.53653303s" podCreationTimestamp="2026-01-24 06:59:22 +0000 UTC" firstStartedPulling="2026-01-24 06:59:24.485486769 +0000 UTC m=+365.781591992" lastFinishedPulling="2026-01-24 06:59:26.914337755 +0000 UTC m=+368.210442988" observedRunningTime="2026-01-24 06:59:27.532608674 +0000 UTC m=+368.828713897" watchObservedRunningTime="2026-01-24 06:59:27.53653303 +0000 UTC m=+368.832638253" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.518274 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.519683 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.576198 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.619786 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.705793 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.705839 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.751936 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:31 crc kubenswrapper[4675]: I0124 06:59:31.578087 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:32 crc kubenswrapper[4675]: I0124 06:59:32.926966 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:32 crc kubenswrapper[4675]: I0124 06:59:32.927223 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:32 crc kubenswrapper[4675]: I0124 06:59:32.963922 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.130401 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.130852 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.168831 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.577968 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.578030 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:38 crc kubenswrapper[4675]: I0124 06:59:38.629801 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:59:38 crc kubenswrapper[4675]: I0124 06:59:38.630303 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.191344 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.192589 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.195540 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.197554 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.209389 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.375947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.376035 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.376188 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.478249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.478342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.478386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.480352 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.488569 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.501887 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.511317 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.126640 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.697391 4675 generic.go:334] "Generic (PLEG): container finished" podID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerID="2ee1de4c569b0dfae84a9127d5e07bf0bf62a91389eaf5b8b6361fce4ef2d02f" exitCode=0 Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.697433 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" event={"ID":"df2777ca-be51-4dc3-b7da-d84bd7ca16c4","Type":"ContainerDied","Data":"2ee1de4c569b0dfae84a9127d5e07bf0bf62a91389eaf5b8b6361fce4ef2d02f"} Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.697455 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" 
event={"ID":"df2777ca-be51-4dc3-b7da-d84bd7ca16c4","Type":"ContainerStarted","Data":"034e1e25b01d6f9441e5a8f075c8606c7081c4408951432813c333585de01e88"} Jan 24 07:00:02 crc kubenswrapper[4675]: I0124 07:00:02.968155 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.012238 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.012393 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.012434 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.013125 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "df2777ca-be51-4dc3-b7da-d84bd7ca16c4" (UID: "df2777ca-be51-4dc3-b7da-d84bd7ca16c4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.017076 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df2777ca-be51-4dc3-b7da-d84bd7ca16c4" (UID: "df2777ca-be51-4dc3-b7da-d84bd7ca16c4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.019883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv" (OuterVolumeSpecName: "kube-api-access-k5mcv") pod "df2777ca-be51-4dc3-b7da-d84bd7ca16c4" (UID: "df2777ca-be51-4dc3-b7da-d84bd7ca16c4"). InnerVolumeSpecName "kube-api-access-k5mcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.113703 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.113781 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") on node \"crc\" DevicePath \"\"" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.113799 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.711818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" 
event={"ID":"df2777ca-be51-4dc3-b7da-d84bd7ca16c4","Type":"ContainerDied","Data":"034e1e25b01d6f9441e5a8f075c8606c7081c4408951432813c333585de01e88"} Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.711871 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="034e1e25b01d6f9441e5a8f075c8606c7081c4408951432813c333585de01e88" Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.712306 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.630167 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.630533 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.630584 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.631208 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:00:08 crc 
kubenswrapper[4675]: I0124 07:00:08.631263 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e" gracePeriod=600 Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.751930 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e" exitCode=0 Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.751958 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"} Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.752329 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58"} Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.752358 4675 scope.go:117] "RemoveContainer" containerID="c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82" Jan 24 07:02:08 crc kubenswrapper[4675]: I0124 07:02:08.630664 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:02:08 crc kubenswrapper[4675]: I0124 07:02:08.631385 4675 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:02:38 crc kubenswrapper[4675]: I0124 07:02:38.629834 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:02:38 crc kubenswrapper[4675]: I0124 07:02:38.632372 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.542415 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5pvn"] Jan 24 07:02:48 crc kubenswrapper[4675]: E0124 07:02:48.543513 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerName="collect-profiles" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.543532 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerName="collect-profiles" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.543668 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerName="collect-profiles" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.544191 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.573098 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5pvn"] Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672322 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrgk8\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-kube-api-access-nrgk8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672374 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-certificates\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-bound-sa-token\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a94693ba-91c6-4366-bc2f-c67b2dbea343-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672473 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-tls\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672505 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a94693ba-91c6-4366-bc2f-c67b2dbea343-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-trusted-ca\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.702050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773620 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a94693ba-91c6-4366-bc2f-c67b2dbea343-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773710 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-tls\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a94693ba-91c6-4366-bc2f-c67b2dbea343-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773786 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-trusted-ca\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773807 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nrgk8\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-kube-api-access-nrgk8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773830 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-certificates\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773848 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-bound-sa-token\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.774288 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a94693ba-91c6-4366-bc2f-c67b2dbea343-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.775227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-trusted-ca\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc 
kubenswrapper[4675]: I0124 07:02:48.775554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-certificates\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.781301 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a94693ba-91c6-4366-bc2f-c67b2dbea343-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.781957 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-tls\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.789205 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-bound-sa-token\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.791091 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrgk8\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-kube-api-access-nrgk8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.861114 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:02:49 crc kubenswrapper[4675]: I0124 07:02:49.092934 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5pvn"] Jan 24 07:02:50 crc kubenswrapper[4675]: I0124 07:02:50.015284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" event={"ID":"a94693ba-91c6-4366-bc2f-c67b2dbea343","Type":"ContainerStarted","Data":"248faf75c96272f7c3d0962643b006b8e35d82fb38af40ec30bb6c1ec025fb47"} Jan 24 07:02:50 crc kubenswrapper[4675]: I0124 07:02:50.015339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" event={"ID":"a94693ba-91c6-4366-bc2f-c67b2dbea343","Type":"ContainerStarted","Data":"ba08d3ea5acf0f09561cb718452ab86b6da2813e31fe520132670188e6e0c591"} Jan 24 07:02:50 crc kubenswrapper[4675]: I0124 07:02:50.016184 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.630256 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.631202 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.631287 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.632320 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.632442 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58" gracePeriod=600 Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.871092 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.908715 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" podStartSLOduration=20.908692328 podStartE2EDuration="20.908692328s" podCreationTimestamp="2026-01-24 07:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:02:50.053406287 +0000 UTC m=+571.349511510" watchObservedRunningTime="2026-01-24 07:03:08.908692328 +0000 UTC m=+590.204797561" Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.938945 4675 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127032 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58" exitCode=0 Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58"} Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127434 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984"} Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127461 4675 scope.go:117] "RemoveContainer" containerID="2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e" Jan 24 07:03:33 crc kubenswrapper[4675]: I0124 07:03:33.998033 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry" containerID="cri-o://b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727" gracePeriod=30 Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.290433 4675 generic.go:334] "Generic (PLEG): container finished" podID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerID="b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727" exitCode=0 Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.290938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerDied","Data":"b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727"}
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.363271 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509248 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509884 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509908 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509981 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.510013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.510119 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.510815 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.512489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.520459 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.520508 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.521148 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.530388 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c" (OuterVolumeSpecName: "kube-api-access-nl48c") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "kube-api-access-nl48c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.530400 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.533117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612037 4675 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612085 4675 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612103 4675 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612120 4675 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612138 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612154 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612170 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.298286 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerDied","Data":"26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c"}
Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.298614 4675 scope.go:117] "RemoveContainer" containerID="b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727"
Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.298735 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.319051 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"]
Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.325997 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"]
Jan 24 07:03:36 crc kubenswrapper[4675]: I0124 07:03:36.948288 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" path="/var/lib/kubelet/pods/ef6caa30-be9c-438c-a494-8b54b5df218c/volumes"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.067394 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"]
Jan 24 07:04:36 crc kubenswrapper[4675]: E0124 07:04:36.068305 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.068323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.068466 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.068938 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.071458 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 24 07:04:36 crc kubenswrapper[4675]: W0124 07:04:36.071538 4675 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Jan 24 07:04:36 crc kubenswrapper[4675]: E0124 07:04:36.071684 4675 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.079226 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dq6k8"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.082516 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gt7xw"]
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.083281 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gt7xw"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.086953 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lthpk"]
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.087503 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lp26r"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.087762 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.088053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.088130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.090059 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jjk4k"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.095047 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"]
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.099452 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gt7xw"]
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.112849 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lthpk"]
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.189084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.189309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.290464 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.391609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.204660 4675 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205136 4675 projected.go:194] Error preparing data for projected volume kube-api-access-fzc65 for pod cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k: failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205246 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65 podName:99008be6-effb-4dc7-a761-ee291c03f093 nodeName:}" failed. No retries permitted until 2026-01-24 07:04:37.705209294 +0000 UTC m=+679.001314557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fzc65" (UniqueName: "kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65") pod "cert-manager-cainjector-cf98fcc89-6kp8k" (UID: "99008be6-effb-4dc7-a761-ee291c03f093") : failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205710 4675 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205791 4675 projected.go:194] Error preparing data for projected volume kube-api-access-6xmb6 for pod cert-manager/cert-manager-858654f9db-gt7xw: failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205848 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6 podName:f9d3eaae-49ca-400c-a277-bdbad7f8125a nodeName:}" failed. No retries permitted until 2026-01-24 07:04:37.705829738 +0000 UTC m=+679.001935001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6xmb6" (UniqueName: "kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6") pod "cert-manager-858654f9db-gt7xw" (UID: "f9d3eaae-49ca-400c-a277-bdbad7f8125a") : failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.405087 4675 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.405399 4675 projected.go:194] Error preparing data for projected volume kube-api-access-7szw8 for pod cert-manager/cert-manager-webhook-687f57d79b-lthpk: failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.405641 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8 podName:261785a7-b436-4597-a36b-473d27769006 nodeName:}" failed. No retries permitted until 2026-01-24 07:04:37.905616591 +0000 UTC m=+679.201721834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7szw8" (UniqueName: "kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8") pod "cert-manager-webhook-687f57d79b-lthpk" (UID: "261785a7-b436-4597-a36b-473d27769006") : failed to sync configmap cache: timed out waiting for the condition
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.610675 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.730045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.730114 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.738263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.740152 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.890018 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.908699 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gt7xw"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.934502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.942181 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.132560 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"]
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.142880 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.190347 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gt7xw"]
Jan 24 07:04:38 crc kubenswrapper[4675]: W0124 07:04:38.199067 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9d3eaae_49ca_400c_a277_bdbad7f8125a.slice/crio-fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c WatchSource:0}: Error finding container fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c: Status 404 returned error can't find the container with id fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.221911 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.389309 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lthpk"]
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.672679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" event={"ID":"261785a7-b436-4597-a36b-473d27769006","Type":"ContainerStarted","Data":"00cf920023e6ffc4d6f7e24c2c2255859f6fad1341112ac91d2093dca19d93f3"}
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.674378 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gt7xw" event={"ID":"f9d3eaae-49ca-400c-a277-bdbad7f8125a","Type":"ContainerStarted","Data":"fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c"}
Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.676480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" event={"ID":"99008be6-effb-4dc7-a761-ee291c03f093","Type":"ContainerStarted","Data":"7e29ea7bcdef375d45c9f209149a5bc35dd4337366a2cd2a90a3ac2abc01cad7"}
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.706995 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gt7xw" event={"ID":"f9d3eaae-49ca-400c-a277-bdbad7f8125a","Type":"ContainerStarted","Data":"5d44fe14110728fd376b736028d1e0d2642ad15c3e234d7a9e40b5223aa9f122"}
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.708822 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" event={"ID":"99008be6-effb-4dc7-a761-ee291c03f093","Type":"ContainerStarted","Data":"8192d9542a3def62268ceb888430d162711471563eb38543f8807b8ca11a2993"}
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.710941 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" event={"ID":"261785a7-b436-4597-a36b-473d27769006","Type":"ContainerStarted","Data":"5e7047018f0893b71e5e22cf91c804ab9ae3cd999814cba2e0b986229582197d"}
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.711121 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.736775 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gt7xw" podStartSLOduration=3.288538577 podStartE2EDuration="6.736754069s" podCreationTimestamp="2026-01-24 07:04:36 +0000 UTC" firstStartedPulling="2026-01-24 07:04:38.202077429 +0000 UTC m=+679.498182652" lastFinishedPulling="2026-01-24 07:04:41.650292921 +0000 UTC m=+682.946398144" observedRunningTime="2026-01-24 07:04:42.721319957 +0000 UTC m=+684.017425210" watchObservedRunningTime="2026-01-24 07:04:42.736754069 +0000 UTC m=+684.032859292"
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.756109 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" podStartSLOduration=3.247615857 podStartE2EDuration="6.756092865s" podCreationTimestamp="2026-01-24 07:04:36 +0000 UTC" firstStartedPulling="2026-01-24 07:04:38.142649553 +0000 UTC m=+679.438754776" lastFinishedPulling="2026-01-24 07:04:41.651126521 +0000 UTC m=+682.947231784" observedRunningTime="2026-01-24 07:04:42.753096453 +0000 UTC m=+684.049201676" watchObservedRunningTime="2026-01-24 07:04:42.756092865 +0000 UTC m=+684.052198088"
Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.779041 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" podStartSLOduration=3.46098482 podStartE2EDuration="6.779021349s" podCreationTimestamp="2026-01-24 07:04:36 +0000 UTC" firstStartedPulling="2026-01-24 07:04:38.395030577 +0000 UTC m=+679.691135790" lastFinishedPulling="2026-01-24 07:04:41.713067096 +0000 UTC m=+683.009172319" observedRunningTime="2026-01-24 07:04:42.775227998 +0000 UTC m=+684.071333241" watchObservedRunningTime="2026-01-24 07:04:42.779021349 +0000 UTC m=+684.075126582"
Jan 24 07:04:48 crc kubenswrapper[4675]: I0124 07:04:48.224042 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk"
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.630395 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.631062 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.777926 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"]
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778610 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller" containerID="cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778863 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging" containerID="cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778876 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node" containerID="cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.779015 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.779043 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb" containerID="cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778850 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="nbdb" containerID="cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778878 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd" containerID="cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.827975 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" containerID="cri-o://7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" gracePeriod=30
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.869192 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/2.log"
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.869985 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log"
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870053 4675 generic.go:334] "Generic (PLEG): container finished" podID="61e129ca-c9dc-4375-b373-5eec702744bd" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" exitCode=2
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870085 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerDied","Data":"c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b"}
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870121 4675 scope.go:117] "RemoveContainer" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe"
Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870775 4675 scope.go:117] "RemoveContainer" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b"
Jan 24 07:05:08 crc kubenswrapper[4675]: E0124 07:05:08.871136 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zx9ns_openshift-multus(61e129ca-c9dc-4375-b373-5eec702744bd)\"" pod="openshift-multus/multus-zx9ns" podUID="61e129ca-c9dc-4375-b373-5eec702744bd"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.144061 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.145958 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-acl-logging/0.log"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.146313 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-controller/0.log"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.146738 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.197979 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j5cds"]
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198202 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198213 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198224 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198231 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198240 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198246 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198253 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198259 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198267 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198273 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198281 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198287 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198293 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198298 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198305 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198311 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198323 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kubecfg-setup"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198329 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kubecfg-setup"
Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198338 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870"
containerName="nbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198344 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="nbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198359 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198444 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198451 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198458 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198465 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198473 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="nbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198483 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198491 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198497 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198505 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198544 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198641 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198647 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198654 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198660 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198766 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.200188 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304102 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304225 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304267 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304344 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304388 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304427 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304499 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304591 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 
07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304631 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304769 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304839 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305005 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305036 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305065 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305325 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305407 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-bin\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-log-socket\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-config\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305847 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-systemd-units\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305894 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-systemd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-ovn\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305927 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305963 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305937 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log" (OuterVolumeSpecName: "node-log") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305987 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305989 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash" (OuterVolumeSpecName: "host-slash") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306019 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306036 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306068 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-slash\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-var-lib-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-node-log\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306177 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-script-lib\") pod 
\"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-netd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306373 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovn-node-metrics-cert\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306414 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-netns\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306445 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-env-overrides\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306269 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides" (OuterVolumeSpecName: "env-overrides") 
pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306406 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306448 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306517 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306553 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-kubelet\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306601 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306637 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306664 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket" (OuterVolumeSpecName: "log-socket") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-etc-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjcs\" (UniqueName: \"kubernetes.io/projected/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-kube-api-access-kxjcs\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306972 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306986 4675 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307014 4675 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307030 4675 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307042 4675 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307053 4675 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307065 4675 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307080 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc 
kubenswrapper[4675]: I0124 07:05:09.307093 4675 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307105 4675 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307115 4675 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307124 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307134 4675 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307144 4675 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307154 4675 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307164 4675 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307173 4675 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.311299 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.311887 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz" (OuterVolumeSpecName: "kube-api-access-4qgbz") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "kube-api-access-4qgbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.319219 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-log-socket\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408617 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-config\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408665 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-systemd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408695 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-ovn\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-systemd-units\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 
07:05:09.408798 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408834 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-slash\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408897 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-var-lib-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408923 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-node-log\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408963 
4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-script-lib\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-netd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovn-node-metrics-cert\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409054 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-netns\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-env-overrides\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409111 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-kubelet\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-etc-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjcs\" (UniqueName: \"kubernetes.io/projected/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-kube-api-access-kxjcs\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-bin\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409332 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409363 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409384 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409403 4675 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-bin\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409517 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-log-socket\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: 
\"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-systemd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410480 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-ovn\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-etc-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410537 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-kubelet\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410569 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-slash\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-systemd-units\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410608 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-node-log\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-var-lib-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410668 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-netns\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410693 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-netd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-config\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.411381 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-script-lib\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.412299 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-env-overrides\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.414032 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovn-node-metrics-cert\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.439690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjcs\" (UniqueName: \"kubernetes.io/projected/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-kube-api-access-kxjcs\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.514766 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: W0124 07:05:09.545685 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd32b06ce_9a59_45f2_96dd_3c8c7cf71845.slice/crio-992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06 WatchSource:0}: Error finding container 992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06: Status 404 returned error can't find the container with id 992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.874950 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/2.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.875903 4675 generic.go:334] "Generic (PLEG): container finished" podID="d32b06ce-9a59-45f2-96dd-3c8c7cf71845" containerID="28d55bfe8712002398d6bd88dcaa800ae0040b6fed4ddfae6b7d6f0582f3a179" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.875939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" 
event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerDied","Data":"28d55bfe8712002398d6bd88dcaa800ae0040b6fed4ddfae6b7d6f0582f3a179"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.875959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.879272 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883114 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-acl-logging/0.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883551 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-controller/0.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883907 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883935 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883942 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883951 4675 
generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883960 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883957 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884042 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884061 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884067 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884073 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883969 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884134 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884150 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884169 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884241 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884255 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884155 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" 
containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea" exitCode=143 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884286 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884343 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884300 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382" exitCode=143 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884380 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884407 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884419 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884429 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884441 4675 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884448 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884455 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884462 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884468 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884475 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884482 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884488 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884494 4675 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884501 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884522 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884530 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884537 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884562 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884570 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884576 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884583 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884586 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884589 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884685 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884694 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884706 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"62a8cadede7a21145e681044886ca9386d55c6d70c06dc737ae9eedf6acff8c9"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884762 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884775 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884782 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884789 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884796 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884803 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884810 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884816 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884822 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884829 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.939158 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.974443 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"]
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.974605 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.982873 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"]
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.998683 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.011252 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.027253 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.040075 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.055234 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.067513 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.084469 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.098578 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.099007 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099043 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099067 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.099424 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099459 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099489 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.099752 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099781 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099799 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100027 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100048 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100063 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100277 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100302 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100320 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100486 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100510 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100526 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100691 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100710 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100742 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100914 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100933 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100947 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.101132 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101157 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101172 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.101360 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101384 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101399 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101657 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101680 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101924 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101944 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102194 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102221 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102491 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102512 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102681 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102698 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102876 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102894 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103093 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103121 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103402 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103434 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103685 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103705 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103955 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103972 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104180 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104199 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104396 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104414 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104621 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104640 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104865 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104893 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105086 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105103 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105284 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105307 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105485 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105504 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105728 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105752 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105936 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105966 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106184 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106205 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106698 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106735 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106904 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106953 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107585 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107604 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107852 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107882 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108072 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with
c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108088 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108283 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108301 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108495 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108520 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108800 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108826 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109034 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109054 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109276 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not 
exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109303 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109516 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897238 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"32d1f53835f949c1219fb31e4eebd15aedc9248e4f0dcabf280e088c1e36bfc2"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"a2ecb1c37e8b54fdeea76b6616f56b47b9b82f72d373b2d4e1a58a50d673d6e8"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897587 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"cf39ffe8f239aca312d4d71d9d596fa9a343774a93fdb5551eafd26495c3e374"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897597 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" 
event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"732d68c8d8a6059ad0930bba3f793e0f13c0a850488d53ff70735d1654f8fa0a"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897607 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"8a40a66ed85e20deb131dcd8e54d3cf4a102986830a2031ae646b58bac168321"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897618 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"669c27148464b5679f1099cdaaca1aac10e58f6ef2d188ed747c6f38670574ae"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.952355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" path="/var/lib/kubelet/pods/50a4333f-fd95-41a0-9ac8-4c21f9000870/volumes" Jan 24 07:05:12 crc kubenswrapper[4675]: I0124 07:05:12.913045 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"07d70ccfcf72051dd4824cf8a3d5af7dd0f1a95ba7a79201917d1018950b322e"} Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.935524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"5aeff035facd893da23f740378eec371fb91507290668ebe30c1f36601784a31"} Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.936340 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.936361 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.936523 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.967883 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" podStartSLOduration=6.967863727 podStartE2EDuration="6.967863727s" podCreationTimestamp="2026-01-24 07:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:05:15.964818184 +0000 UTC m=+717.260923407" watchObservedRunningTime="2026-01-24 07:05:15.967863727 +0000 UTC m=+717.263968960" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.972652 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.982049 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:19 crc kubenswrapper[4675]: I0124 07:05:19.942309 4675 scope.go:117] "RemoveContainer" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" Jan 24 07:05:19 crc kubenswrapper[4675]: E0124 07:05:19.943171 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zx9ns_openshift-multus(61e129ca-c9dc-4375-b373-5eec702744bd)\"" pod="openshift-multus/multus-zx9ns" podUID="61e129ca-c9dc-4375-b373-5eec702744bd" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.237694 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"] Jan 24 07:05:28 crc 
kubenswrapper[4675]: I0124 07:05:28.239472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.241386 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.251772 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"] Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.344465 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.344535 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.344626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.446703 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.446769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.446842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.447503 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.447539 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.466829 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.557171 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.590896 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.591014 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.591245 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.591349 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" Jan 24 07:05:29 crc kubenswrapper[4675]: I0124 07:05:29.029166 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: I0124 07:05:29.029853 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.049964 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.050023 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.050046 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.050090 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" Jan 24 07:05:32 crc kubenswrapper[4675]: I0124 07:05:32.946765 4675 scope.go:117] "RemoveContainer" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" Jan 24 07:05:34 crc kubenswrapper[4675]: I0124 07:05:34.055073 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/2.log" Jan 24 07:05:34 crc kubenswrapper[4675]: I0124 07:05:34.055424 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"4184088d4877e714961c97864e506bc2d3af178e0cb2be9b01953bb12d09d59e"} Jan 24 07:05:38 crc kubenswrapper[4675]: I0124 07:05:38.629465 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:05:38 crc kubenswrapper[4675]: I0124 07:05:38.629946 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:05:39 crc kubenswrapper[4675]: I0124 07:05:39.540791 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:42 crc kubenswrapper[4675]: I0124 07:05:42.942437 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:42 crc kubenswrapper[4675]: I0124 07:05:42.943420 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:43 crc kubenswrapper[4675]: I0124 07:05:43.436676 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"] Jan 24 07:05:43 crc kubenswrapper[4675]: W0124 07:05:43.447111 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a14a2ad_1879_4684_b69a_64e6bebf6424.slice/crio-98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8 WatchSource:0}: Error finding container 98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8: Status 404 returned error can't find the container with id 98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8 Jan 24 07:05:44 crc kubenswrapper[4675]: I0124 07:05:44.129302 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerID="4f6a97c6d78d429cb4ec7577903962e3f8536d297ac20945806a8956224a4cb9" exitCode=0 Jan 24 07:05:44 crc kubenswrapper[4675]: I0124 07:05:44.129646 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"4f6a97c6d78d429cb4ec7577903962e3f8536d297ac20945806a8956224a4cb9"} Jan 24 07:05:44 crc kubenswrapper[4675]: I0124 07:05:44.129687 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" 
event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerStarted","Data":"98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8"} Jan 24 07:05:46 crc kubenswrapper[4675]: I0124 07:05:46.142321 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerID="9cc2e5d414d97844e8b7e3f510160406460833ce1b2a12f2b679a5ddb0c7ef9f" exitCode=0 Jan 24 07:05:46 crc kubenswrapper[4675]: I0124 07:05:46.142376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"9cc2e5d414d97844e8b7e3f510160406460833ce1b2a12f2b679a5ddb0c7ef9f"} Jan 24 07:05:47 crc kubenswrapper[4675]: I0124 07:05:47.153830 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerID="3bb88a6fd7112c2dfd38ae4a1a3632267eb02e47e5f3a30268c4dcce60f04cb5" exitCode=0 Jan 24 07:05:47 crc kubenswrapper[4675]: I0124 07:05:47.153903 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"3bb88a6fd7112c2dfd38ae4a1a3632267eb02e47e5f3a30268c4dcce60f04cb5"} Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.406085 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.517106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"6a14a2ad-1879-4684-b69a-64e6bebf6424\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.517345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"6a14a2ad-1879-4684-b69a-64e6bebf6424\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.517446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"6a14a2ad-1879-4684-b69a-64e6bebf6424\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.518753 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle" (OuterVolumeSpecName: "bundle") pod "6a14a2ad-1879-4684-b69a-64e6bebf6424" (UID: "6a14a2ad-1879-4684-b69a-64e6bebf6424"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.527001 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9" (OuterVolumeSpecName: "kube-api-access-fsln9") pod "6a14a2ad-1879-4684-b69a-64e6bebf6424" (UID: "6a14a2ad-1879-4684-b69a-64e6bebf6424"). InnerVolumeSpecName "kube-api-access-fsln9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.537871 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util" (OuterVolumeSpecName: "util") pod "6a14a2ad-1879-4684-b69a-64e6bebf6424" (UID: "6a14a2ad-1879-4684-b69a-64e6bebf6424"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.619257 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.619310 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.619332 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.189904 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8"} Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.189942 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8" Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.189954 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.238036 4675 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998189 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dm24p"] Jan 24 07:05:54 crc kubenswrapper[4675]: E0124 07:05:54.998801 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="util" Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998817 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="util" Jan 24 07:05:54 crc kubenswrapper[4675]: E0124 07:05:54.998831 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="pull" Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998839 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="pull" Jan 24 07:05:54 crc kubenswrapper[4675]: E0124 07:05:54.998851 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="extract" Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998859 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="extract" Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998968 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="extract" Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.999473 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.002177 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.002405 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.002554 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-j9hkt" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.004322 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjjw\" (UniqueName: \"kubernetes.io/projected/b344cabd-3dd6-4691-990b-045aaf4c622f-kube-api-access-5sjjw\") pod \"nmstate-operator-646758c888-dm24p\" (UID: \"b344cabd-3dd6-4691-990b-045aaf4c622f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.056769 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dm24p"] Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.105542 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjjw\" (UniqueName: \"kubernetes.io/projected/b344cabd-3dd6-4691-990b-045aaf4c622f-kube-api-access-5sjjw\") pod \"nmstate-operator-646758c888-dm24p\" (UID: \"b344cabd-3dd6-4691-990b-045aaf4c622f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.121516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjjw\" (UniqueName: \"kubernetes.io/projected/b344cabd-3dd6-4691-990b-045aaf4c622f-kube-api-access-5sjjw\") pod \"nmstate-operator-646758c888-dm24p\" (UID: 
\"b344cabd-3dd6-4691-990b-045aaf4c622f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.315849 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.516237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dm24p"] Jan 24 07:05:56 crc kubenswrapper[4675]: I0124 07:05:56.226963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" event={"ID":"b344cabd-3dd6-4691-990b-045aaf4c622f","Type":"ContainerStarted","Data":"e80042bebb2fa475b623cf2d88b10c10af35fde26e67979d92737a7f15e88e5b"} Jan 24 07:05:58 crc kubenswrapper[4675]: I0124 07:05:58.240141 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" event={"ID":"b344cabd-3dd6-4691-990b-045aaf4c622f","Type":"ContainerStarted","Data":"93cc2bf4ecde42a85f53b763a3efb7d54c8a771a3d936f169c6e1aa5ff7efe77"} Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.249302 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" podStartSLOduration=3.206494882 podStartE2EDuration="5.249282092s" podCreationTimestamp="2026-01-24 07:05:54 +0000 UTC" firstStartedPulling="2026-01-24 07:05:55.525488244 +0000 UTC m=+756.821593467" lastFinishedPulling="2026-01-24 07:05:57.568275454 +0000 UTC m=+758.864380677" observedRunningTime="2026-01-24 07:05:58.255995534 +0000 UTC m=+759.552100787" watchObservedRunningTime="2026-01-24 07:05:59.249282092 +0000 UTC m=+760.545387325" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.266593 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c56d8"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 
07:05:59.268977 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.272447 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.273468 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.282786 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c56d8"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.292666 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.308262 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gw8vt" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.335115 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ljst6"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.335971 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.363852 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365404 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-nmstate-lock\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365480 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-dbus-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365508 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/469eb31f-c261-4d7f-8a12-c10ed969bd55-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365531 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqlx\" (UniqueName: \"kubernetes.io/projected/56a6d660-7a53-4b25-b4e4-3d3f97a67430-kube-api-access-dqqlx\") pod \"nmstate-metrics-54757c584b-c56d8\" (UID: \"56a6d660-7a53-4b25-b4e4-3d3f97a67430\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365553 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59pr\" (UniqueName: \"kubernetes.io/projected/8c82b668-f857-4de6-a938-333a7e44591f-kube-api-access-h59pr\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-ovs-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365638 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgr4c\" (UniqueName: \"kubernetes.io/projected/469eb31f-c261-4d7f-8a12-c10ed969bd55-kube-api-access-dgr4c\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466205 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-dbus-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/469eb31f-c261-4d7f-8a12-c10ed969bd55-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466264 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqlx\" (UniqueName: \"kubernetes.io/projected/56a6d660-7a53-4b25-b4e4-3d3f97a67430-kube-api-access-dqqlx\") pod \"nmstate-metrics-54757c584b-c56d8\" (UID: \"56a6d660-7a53-4b25-b4e4-3d3f97a67430\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59pr\" (UniqueName: \"kubernetes.io/projected/8c82b668-f857-4de6-a938-333a7e44591f-kube-api-access-h59pr\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-ovs-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-ovs-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-dbus-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466578 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dgr4c\" (UniqueName: \"kubernetes.io/projected/469eb31f-c261-4d7f-8a12-c10ed969bd55-kube-api-access-dgr4c\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-nmstate-lock\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.467033 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-nmstate-lock\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.474824 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/469eb31f-c261-4d7f-8a12-c10ed969bd55-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.486444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgr4c\" (UniqueName: \"kubernetes.io/projected/469eb31f-c261-4d7f-8a12-c10ed969bd55-kube-api-access-dgr4c\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.490402 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqlx\" 
(UniqueName: \"kubernetes.io/projected/56a6d660-7a53-4b25-b4e4-3d3f97a67430-kube-api-access-dqqlx\") pod \"nmstate-metrics-54757c584b-c56d8\" (UID: \"56a6d660-7a53-4b25-b4e4-3d3f97a67430\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.491515 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.492150 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.493788 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59pr\" (UniqueName: \"kubernetes.io/projected/8c82b668-f857-4de6-a938-333a7e44591f-kube-api-access-h59pr\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.495937 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.497714 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-95qzk" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.512071 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.517767 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.567429 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b289d862-4851-4f88-9a5b-4bed8cd70bd8-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.567479 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.567532 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6nq\" (UniqueName: \"kubernetes.io/projected/b289d862-4851-4f88-9a5b-4bed8cd70bd8-kube-api-access-8d6nq\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.592382 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.612519 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.652133 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.677337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6nq\" (UniqueName: \"kubernetes.io/projected/b289d862-4851-4f88-9a5b-4bed8cd70bd8-kube-api-access-8d6nq\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.677402 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b289d862-4851-4f88-9a5b-4bed8cd70bd8-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.677436 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: E0124 07:05:59.677544 4675 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 24 07:05:59 crc kubenswrapper[4675]: E0124 07:05:59.677595 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert podName:b289d862-4851-4f88-9a5b-4bed8cd70bd8 nodeName:}" failed. No retries permitted until 2026-01-24 07:06:00.177577804 +0000 UTC m=+761.473683027 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-szblh" (UID: "b289d862-4851-4f88-9a5b-4bed8cd70bd8") : secret "plugin-serving-cert" not found Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.678463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b289d862-4851-4f88-9a5b-4bed8cd70bd8-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.698646 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cbdf5f797-nnh6k"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.699247 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.723572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6nq\" (UniqueName: \"kubernetes.io/projected/b289d862-4851-4f88-9a5b-4bed8cd70bd8-kube-api-access-8d6nq\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.727149 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cbdf5f797-nnh6k"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.881928 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-oauth-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: 
\"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.882260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-oauth-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.882287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-service-ca\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883058 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883124 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883166 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-trusted-ca-bundle\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883183 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8q5\" (UniqueName: \"kubernetes.io/projected/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-kube-api-access-4m8q5\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.905542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c56d8"] Jan 24 07:05:59 crc kubenswrapper[4675]: W0124 07:05:59.913814 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56a6d660_7a53_4b25_b4e4_3d3f97a67430.slice/crio-1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881 WatchSource:0}: Error finding container 1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881: Status 404 returned error can't find the container with id 1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881 Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.971585 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"] Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984150 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-oauth-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984205 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-service-ca\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984305 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-trusted-ca-bundle\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984320 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8q5\" (UniqueName: \"kubernetes.io/projected/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-kube-api-access-4m8q5\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984381 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-oauth-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.985398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-oauth-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.985917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.987265 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-service-ca\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.987418 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-trusted-ca-bundle\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.990816 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.991075 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-oauth-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.003123 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8q5\" (UniqueName: \"kubernetes.io/projected/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-kube-api-access-4m8q5\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.025077 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.190591 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.194320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.197796 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cbdf5f797-nnh6k"] Jan 24 07:06:00 crc kubenswrapper[4675]: W0124 07:06:00.200504 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d59c29c_83d5_481c_bc19_e0bb7acf8b75.slice/crio-4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2 WatchSource:0}: Error finding container 4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2: Status 404 returned error can't find the container with id 4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2 Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.250026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ljst6" event={"ID":"8c82b668-f857-4de6-a938-333a7e44591f","Type":"ContainerStarted","Data":"1ed2ebcc10564f7083d3d1fbcbfeee9e607764c4e45150c9cad5204ec5fe371a"} Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 
07:06:00.253239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" event={"ID":"469eb31f-c261-4d7f-8a12-c10ed969bd55","Type":"ContainerStarted","Data":"6f5d82e3dcd8c7e1c8ffbdb333466eb1cfdb33888a9242a76f69da6c1899d2fa"} Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.255999 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbdf5f797-nnh6k" event={"ID":"0d59c29c-83d5-481c-bc19-e0bb7acf8b75","Type":"ContainerStarted","Data":"4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2"} Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.257054 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" event={"ID":"56a6d660-7a53-4b25-b4e4-3d3f97a67430","Type":"ContainerStarted","Data":"1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881"} Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.451618 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.673437 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"] Jan 24 07:06:00 crc kubenswrapper[4675]: W0124 07:06:00.680017 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb289d862_4851_4f88_9a5b_4bed8cd70bd8.slice/crio-5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455 WatchSource:0}: Error finding container 5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455: Status 404 returned error can't find the container with id 5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455 Jan 24 07:06:01 crc kubenswrapper[4675]: I0124 07:06:01.265795 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbdf5f797-nnh6k" event={"ID":"0d59c29c-83d5-481c-bc19-e0bb7acf8b75","Type":"ContainerStarted","Data":"e4a7622f6e4eb44ef5754ff21466c99266d679e1006f7833d5365c94ef1d5985"} Jan 24 07:06:01 crc kubenswrapper[4675]: I0124 07:06:01.269453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" event={"ID":"b289d862-4851-4f88-9a5b-4bed8cd70bd8","Type":"ContainerStarted","Data":"5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455"} Jan 24 07:06:01 crc kubenswrapper[4675]: I0124 07:06:01.288672 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cbdf5f797-nnh6k" podStartSLOduration=2.288651831 podStartE2EDuration="2.288651831s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:06:01.285773622 +0000 UTC m=+762.581878875" watchObservedRunningTime="2026-01-24 
07:06:01.288651831 +0000 UTC m=+762.584757054" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.282658 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" event={"ID":"b289d862-4851-4f88-9a5b-4bed8cd70bd8","Type":"ContainerStarted","Data":"e17a7392a9ee8fad75e776d2951ae1b6298562f9d3b6459ece24cd0935e32fac"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.285659 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" event={"ID":"56a6d660-7a53-4b25-b4e4-3d3f97a67430","Type":"ContainerStarted","Data":"4d16f7493074f5a11f41f156db582a7db56f8501907befe873d2395a86c85355"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.286984 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ljst6" event={"ID":"8c82b668-f857-4de6-a938-333a7e44591f","Type":"ContainerStarted","Data":"c7f88f371c5e75739d4c674c37c8c6835a48ff0036fc48e77724a8426988b56a"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.287107 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.288313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" event={"ID":"469eb31f-c261-4d7f-8a12-c10ed969bd55","Type":"ContainerStarted","Data":"2204e60210d282b694b456e044d53d4943e4d000d7bd0546023bb2fbec57f988"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.288475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.299677 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" podStartSLOduration=2.015700271 podStartE2EDuration="4.299654512s" 
podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:06:00.681966668 +0000 UTC m=+761.978071891" lastFinishedPulling="2026-01-24 07:06:02.965920909 +0000 UTC m=+764.262026132" observedRunningTime="2026-01-24 07:06:03.297779357 +0000 UTC m=+764.593884580" watchObservedRunningTime="2026-01-24 07:06:03.299654512 +0000 UTC m=+764.595759735" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.357219 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" podStartSLOduration=1.311132488 podStartE2EDuration="4.357203397s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:05:59.973863631 +0000 UTC m=+761.269968854" lastFinishedPulling="2026-01-24 07:06:03.01993452 +0000 UTC m=+764.316039763" observedRunningTime="2026-01-24 07:06:03.353444307 +0000 UTC m=+764.649549530" watchObservedRunningTime="2026-01-24 07:06:03.357203397 +0000 UTC m=+764.653308620" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.358151 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ljst6" podStartSLOduration=1.131837236 podStartE2EDuration="4.35814388s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:05:59.738002977 +0000 UTC m=+761.034108200" lastFinishedPulling="2026-01-24 07:06:02.964309611 +0000 UTC m=+764.260414844" observedRunningTime="2026-01-24 07:06:03.326915743 +0000 UTC m=+764.623020966" watchObservedRunningTime="2026-01-24 07:06:03.35814388 +0000 UTC m=+764.654249123" Jan 24 07:06:06 crc kubenswrapper[4675]: I0124 07:06:06.351970 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" event={"ID":"56a6d660-7a53-4b25-b4e4-3d3f97a67430","Type":"ContainerStarted","Data":"90567a0c179975879e6cd00341195750d2d56cdff6c343f397abb1fedf9255e2"} Jan 24 07:06:06 crc kubenswrapper[4675]: 
I0124 07:06:06.371147 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" podStartSLOduration=1.968620694 podStartE2EDuration="7.371122476s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:05:59.916382748 +0000 UTC m=+761.212487961" lastFinishedPulling="2026-01-24 07:06:05.31888452 +0000 UTC m=+766.614989743" observedRunningTime="2026-01-24 07:06:06.368860363 +0000 UTC m=+767.664965636" watchObservedRunningTime="2026-01-24 07:06:06.371122476 +0000 UTC m=+767.667227719" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.690428 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.691741 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.711225 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.781712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.781848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc 
kubenswrapper[4675]: I0124 07:06:07.781895 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.883393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.883496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.884347 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.885179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.885254 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.904762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.017583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.339611 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:08 crc kubenswrapper[4675]: W0124 07:06:08.344900 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d389362_a217_4c05_9d80_ed31811768dc.slice/crio-fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb WatchSource:0}: Error finding container fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb: Status 404 returned error can't find the container with id fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.366686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerStarted","Data":"fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb"} Jan 24 07:06:08 crc 
kubenswrapper[4675]: I0124 07:06:08.630387 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.630685 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.630753 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.631308 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.631364 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984" gracePeriod=600 Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.373493 4675 generic.go:334] "Generic (PLEG): container finished" podID="7d389362-a217-4c05-9d80-ed31811768dc" 
containerID="e194927f2a604d7b81f11294d797a1da906415ab50991bcb93caea8c2a7657e2" exitCode=0 Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.373573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"e194927f2a604d7b81f11294d797a1da906415ab50991bcb93caea8c2a7657e2"} Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378753 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984" exitCode=0 Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378793 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984"} Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378825 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d"} Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378881 4675 scope.go:117] "RemoveContainer" containerID="ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58" Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.677838 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.025253 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.025525 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.030729 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.389412 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerStarted","Data":"14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99"} Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.406392 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.465005 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.409343 4675 generic.go:334] "Generic (PLEG): container finished" podID="7d389362-a217-4c05-9d80-ed31811768dc" containerID="14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99" exitCode=0 Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.410011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99"} Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.410063 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerStarted","Data":"4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26"} Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.439403 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-v27xv" podStartSLOduration=3.055744939 podStartE2EDuration="4.439351322s" podCreationTimestamp="2026-01-24 07:06:07 +0000 UTC" firstStartedPulling="2026-01-24 07:06:09.378306475 +0000 UTC m=+770.674411698" lastFinishedPulling="2026-01-24 07:06:10.761912858 +0000 UTC m=+772.058018081" observedRunningTime="2026-01-24 07:06:11.437667002 +0000 UTC m=+772.733772225" watchObservedRunningTime="2026-01-24 07:06:11.439351322 +0000 UTC m=+772.735456545" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.879927 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.883883 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.888900 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.910590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.910761 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.910880 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012660 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012794 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.013066 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.030897 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.208231 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.643700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:17 crc kubenswrapper[4675]: I0124 07:06:17.445222 4675 generic.go:334] "Generic (PLEG): container finished" podID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerID="1d98d86bba040151e8b910b0f27ae9bbdfed1abab02298643071d56d97f908e2" exitCode=0 Jan 24 07:06:17 crc kubenswrapper[4675]: I0124 07:06:17.445303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"1d98d86bba040151e8b910b0f27ae9bbdfed1abab02298643071d56d97f908e2"} Jan 24 07:06:17 crc kubenswrapper[4675]: I0124 07:06:17.445459 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerStarted","Data":"c409d86ef591245d15fd8a59452d9dc84cc21a8a336c9225e59ad1d8785554a8"} Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.017962 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.018159 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.070752 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.455138 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerStarted","Data":"edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99"} Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.510912 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:19 crc kubenswrapper[4675]: I0124 07:06:19.463829 4675 generic.go:334] "Generic (PLEG): container finished" podID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerID="edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99" exitCode=0 Jan 24 07:06:19 crc kubenswrapper[4675]: I0124 07:06:19.463918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99"} Jan 24 07:06:19 crc kubenswrapper[4675]: I0124 07:06:19.618799 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:06:20 crc kubenswrapper[4675]: I0124 07:06:20.472149 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" 
event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerStarted","Data":"df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f"} Jan 24 07:06:20 crc kubenswrapper[4675]: I0124 07:06:20.494109 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvpcf" podStartSLOduration=3.088694909 podStartE2EDuration="5.494092052s" podCreationTimestamp="2026-01-24 07:06:15 +0000 UTC" firstStartedPulling="2026-01-24 07:06:17.44748943 +0000 UTC m=+778.743594653" lastFinishedPulling="2026-01-24 07:06:19.852886573 +0000 UTC m=+781.148991796" observedRunningTime="2026-01-24 07:06:20.492233038 +0000 UTC m=+781.788338261" watchObservedRunningTime="2026-01-24 07:06:20.494092052 +0000 UTC m=+781.790197275" Jan 24 07:06:21 crc kubenswrapper[4675]: I0124 07:06:21.672644 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:21 crc kubenswrapper[4675]: I0124 07:06:21.672901 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v27xv" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" containerID="cri-o://4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26" gracePeriod=2 Jan 24 07:06:24 crc kubenswrapper[4675]: I0124 07:06:24.493266 4675 generic.go:334] "Generic (PLEG): container finished" podID="7d389362-a217-4c05-9d80-ed31811768dc" containerID="4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26" exitCode=0 Jan 24 07:06:24 crc kubenswrapper[4675]: I0124 07:06:24.493353 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26"} Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.287666 4675 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.288778 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.345233 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.345292 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.345364 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.354553 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.446323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"redhat-marketplace-qsnl7\" (UID: 
\"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.446690 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.446874 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.447331 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.447465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.469962 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.470376 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.500095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb"} Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.500139 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.500149 4675 scope.go:117] "RemoveContainer" containerID="4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.530671 4675 scope.go:117] "RemoveContainer" containerID="14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.548256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"7d389362-a217-4c05-9d80-ed31811768dc\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.548308 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod 
\"7d389362-a217-4c05-9d80-ed31811768dc\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.548341 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"7d389362-a217-4c05-9d80-ed31811768dc\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.549814 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities" (OuterVolumeSpecName: "utilities") pod "7d389362-a217-4c05-9d80-ed31811768dc" (UID: "7d389362-a217-4c05-9d80-ed31811768dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.560537 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm" (OuterVolumeSpecName: "kube-api-access-6qvvm") pod "7d389362-a217-4c05-9d80-ed31811768dc" (UID: "7d389362-a217-4c05-9d80-ed31811768dc"). InnerVolumeSpecName "kube-api-access-6qvvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.564897 4675 scope.go:117] "RemoveContainer" containerID="e194927f2a604d7b81f11294d797a1da906415ab50991bcb93caea8c2a7657e2" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.607668 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.647852 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d389362-a217-4c05-9d80-ed31811768dc" (UID: "7d389362-a217-4c05-9d80-ed31811768dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.651354 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.651393 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.651408 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.839848 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.845289 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.890916 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.210271 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.211275 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.246693 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.507912 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerStarted","Data":"3a6b939ba9b0f49f20dbbe8a4b746d440813cfcb7dc60015231946094cb0835d"} Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.568893 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:27 crc kubenswrapper[4675]: I0124 07:06:27.000059 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d389362-a217-4c05-9d80-ed31811768dc" path="/var/lib/kubelet/pods/7d389362-a217-4c05-9d80-ed31811768dc/volumes" Jan 24 07:06:29 crc kubenswrapper[4675]: I0124 07:06:29.528810 4675 generic.go:334] "Generic (PLEG): container finished" podID="4941a74b-f8db-4960-a1d7-7585b2099620" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" exitCode=0 Jan 24 07:06:29 crc kubenswrapper[4675]: I0124 07:06:29.529153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6"} Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.274500 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.275185 
4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvpcf" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" containerID="cri-o://df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f" gracePeriod=2 Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.548891 4675 generic.go:334] "Generic (PLEG): container finished" podID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerID="df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f" exitCode=0 Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.549031 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f"} Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.553113 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerStarted","Data":"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a"} Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.122670 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.151714 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.151885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.151937 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.152781 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities" (OuterVolumeSpecName: "utilities") pod "34d5d6a5-fbe7-4f14-a530-8b78604a61a3" (UID: "34d5d6a5-fbe7-4f14-a530-8b78604a61a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.157325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl" (OuterVolumeSpecName: "kube-api-access-g4hpl") pod "34d5d6a5-fbe7-4f14-a530-8b78604a61a3" (UID: "34d5d6a5-fbe7-4f14-a530-8b78604a61a3"). InnerVolumeSpecName "kube-api-access-g4hpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.253392 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.253426 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.267525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34d5d6a5-fbe7-4f14-a530-8b78604a61a3" (UID: "34d5d6a5-fbe7-4f14-a530-8b78604a61a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.354889 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.561026 4675 generic.go:334] "Generic (PLEG): container finished" podID="4941a74b-f8db-4960-a1d7-7585b2099620" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" exitCode=0 Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.561080 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a"} Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.564265 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"c409d86ef591245d15fd8a59452d9dc84cc21a8a336c9225e59ad1d8785554a8"} Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.564328 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.564337 4675 scope.go:117] "RemoveContainer" containerID="df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.596889 4675 scope.go:117] "RemoveContainer" containerID="edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.612032 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.615748 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.636370 4675 scope.go:117] "RemoveContainer" containerID="1d98d86bba040151e8b910b0f27ae9bbdfed1abab02298643071d56d97f908e2" Jan 24 07:06:32 crc kubenswrapper[4675]: I0124 07:06:32.574542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerStarted","Data":"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f"} Jan 24 07:06:32 crc kubenswrapper[4675]: I0124 07:06:32.950008 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" path="/var/lib/kubelet/pods/34d5d6a5-fbe7-4f14-a530-8b78604a61a3/volumes" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.515192 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-f9d7485db-c64jl" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" containerID="cri-o://3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" gracePeriod=15 Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.608924 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.608992 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.670332 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.698606 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qsnl7" podStartSLOduration=8.232331873 podStartE2EDuration="10.698585577s" podCreationTimestamp="2026-01-24 07:06:25 +0000 UTC" firstStartedPulling="2026-01-24 07:06:29.530904836 +0000 UTC m=+790.827010109" lastFinishedPulling="2026-01-24 07:06:31.9971586 +0000 UTC m=+793.293263813" observedRunningTime="2026-01-24 07:06:32.599948373 +0000 UTC m=+793.896053626" watchObservedRunningTime="2026-01-24 07:06:35.698585577 +0000 UTC m=+796.994690800" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.942624 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c64jl_c66b0b0f-0581-49e6-bfa7-548678ab6de8/console/0.log" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.944766 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020291 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020404 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020426 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020443 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020491 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020514 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.027576 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.027755 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config" (OuterVolumeSpecName: "console-config") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.028218 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.030531 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca" (OuterVolumeSpecName: "service-ca") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.038566 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.047354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25" (OuterVolumeSpecName: "kube-api-access-95m25") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "kube-api-access-95m25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.050149 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122489 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122856 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122872 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122884 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122898 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122910 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122923 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc 
kubenswrapper[4675]: I0124 07:06:36.602039 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c64jl_c66b0b0f-0581-49e6-bfa7-548678ab6de8/console/0.log" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602117 4675 generic.go:334] "Generic (PLEG): container finished" podID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" exitCode=2 Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerDied","Data":"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663"} Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602186 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602237 4675 scope.go:117] "RemoveContainer" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602226 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerDied","Data":"9f62761dfa0e23278a88b4c9d7acb6c23e771672906712e8cf7b32e35ec90e90"} Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.620530 4675 scope.go:117] "RemoveContainer" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.620966 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663\": container with ID starting with 3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663 not 
found: ID does not exist" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.621005 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663"} err="failed to get container status \"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663\": rpc error: code = NotFound desc = could not find container \"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663\": container with ID starting with 3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663 not found: ID does not exist" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.633778 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.639547 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.938547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc"] Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.938948 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.938970 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.938989 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939004 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939026 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939039 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939070 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939088 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939117 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939133 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939162 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939178 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939197 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939210 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939423 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939449 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.940973 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.944119 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.950146 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" path="/var/lib/kubelet/pods/c66b0b0f-0581-49e6-bfa7-548678ab6de8/volumes" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.950834 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc"] Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.040361 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.040452 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.040485 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.141890 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.142003 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc 
kubenswrapper[4675]: I0124 07:06:37.142086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.142461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.142603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.159490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.261672 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.449141 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc"] Jan 24 07:06:37 crc kubenswrapper[4675]: W0124 07:06:37.462960 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a17869_4316_441a_ba35_dc9c1660b966.slice/crio-74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0 WatchSource:0}: Error finding container 74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0: Status 404 returned error can't find the container with id 74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0 Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.609049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerStarted","Data":"74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0"} Jan 24 07:06:38 crc kubenswrapper[4675]: I0124 07:06:38.620390 4675 generic.go:334] "Generic (PLEG): container finished" podID="55a17869-4316-441a-ba35-dc9c1660b966" containerID="fe323b02c4e6e39780da2f5799519f1b1a30682395decc4ac60a5930313e5ebf" exitCode=0 Jan 24 07:06:38 crc kubenswrapper[4675]: I0124 07:06:38.620802 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"fe323b02c4e6e39780da2f5799519f1b1a30682395decc4ac60a5930313e5ebf"} Jan 24 07:06:41 crc kubenswrapper[4675]: I0124 07:06:41.655056 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="55a17869-4316-441a-ba35-dc9c1660b966" containerID="513678d78adb6113c117cac2ca6b3799e554a164b11057d772bdf64986771363" exitCode=0 Jan 24 07:06:41 crc kubenswrapper[4675]: I0124 07:06:41.655119 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"513678d78adb6113c117cac2ca6b3799e554a164b11057d772bdf64986771363"} Jan 24 07:06:42 crc kubenswrapper[4675]: I0124 07:06:42.662031 4675 generic.go:334] "Generic (PLEG): container finished" podID="55a17869-4316-441a-ba35-dc9c1660b966" containerID="9a1e6c728b0cd2692615eb66d6293f36556f6b94d0544479444ea681d67e49ff" exitCode=0 Jan 24 07:06:42 crc kubenswrapper[4675]: I0124 07:06:42.662079 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"9a1e6c728b0cd2692615eb66d6293f36556f6b94d0544479444ea681d67e49ff"} Jan 24 07:06:43 crc kubenswrapper[4675]: I0124 07:06:43.915840 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.035044 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"55a17869-4316-441a-ba35-dc9c1660b966\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.035155 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"55a17869-4316-441a-ba35-dc9c1660b966\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.035226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"55a17869-4316-441a-ba35-dc9c1660b966\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.036443 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle" (OuterVolumeSpecName: "bundle") pod "55a17869-4316-441a-ba35-dc9c1660b966" (UID: "55a17869-4316-441a-ba35-dc9c1660b966"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.042078 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt" (OuterVolumeSpecName: "kube-api-access-5wzxt") pod "55a17869-4316-441a-ba35-dc9c1660b966" (UID: "55a17869-4316-441a-ba35-dc9c1660b966"). InnerVolumeSpecName "kube-api-access-5wzxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.047232 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util" (OuterVolumeSpecName: "util") pod "55a17869-4316-441a-ba35-dc9c1660b966" (UID: "55a17869-4316-441a-ba35-dc9c1660b966"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.136893 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.136959 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.136991 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.697123 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0"} Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.697425 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.697268 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290136 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:45 crc kubenswrapper[4675]: E0124 07:06:45.290448 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="pull" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290467 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="pull" Jan 24 07:06:45 crc kubenswrapper[4675]: E0124 07:06:45.290491 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="extract" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290504 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="extract" Jan 24 07:06:45 crc kubenswrapper[4675]: E0124 07:06:45.290539 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="util" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290551 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="util" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290712 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="extract" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.292834 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.299890 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.351132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.351257 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.351284 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452293 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452869 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.453164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.472933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.635277 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.686849 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.992144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:46 crc kubenswrapper[4675]: I0124 07:06:46.712832 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492" exitCode=0 Jan 24 07:06:46 crc kubenswrapper[4675]: I0124 07:06:46.712948 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"} Jan 24 07:06:46 crc kubenswrapper[4675]: I0124 07:06:46.713203 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerStarted","Data":"50d2143d6a15ce1effec7497d78dc551e5d67a35354b8365b053d80281a18399"} Jan 24 07:06:47 crc kubenswrapper[4675]: I0124 07:06:47.720860 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerStarted","Data":"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"} Jan 24 07:06:48 crc kubenswrapper[4675]: I0124 07:06:48.728869 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786" exitCode=0 Jan 24 07:06:48 crc kubenswrapper[4675]: I0124 07:06:48.728964 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"} Jan 24 07:06:49 crc kubenswrapper[4675]: I0124 07:06:49.736609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerStarted","Data":"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"} Jan 24 07:06:49 crc kubenswrapper[4675]: I0124 07:06:49.756301 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r2zwn" podStartSLOduration=2.033364793 podStartE2EDuration="4.756279907s" podCreationTimestamp="2026-01-24 07:06:45 +0000 UTC" firstStartedPulling="2026-01-24 07:06:46.714963654 +0000 UTC m=+808.011068917" lastFinishedPulling="2026-01-24 07:06:49.437878818 +0000 UTC m=+810.733984031" observedRunningTime="2026-01-24 07:06:49.755265773 +0000 UTC m=+811.051370996" watchObservedRunningTime="2026-01-24 07:06:49.756279907 +0000 UTC m=+811.052385130" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.271577 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.271801 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qsnl7" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server" containerID="cri-o://b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" gracePeriod=2 Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.659617 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.718457 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"4941a74b-f8db-4960-a1d7-7585b2099620\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.718549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"4941a74b-f8db-4960-a1d7-7585b2099620\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.718614 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"4941a74b-f8db-4960-a1d7-7585b2099620\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.727432 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities" (OuterVolumeSpecName: "utilities") pod "4941a74b-f8db-4960-a1d7-7585b2099620" (UID: "4941a74b-f8db-4960-a1d7-7585b2099620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.736913 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8" (OuterVolumeSpecName: "kube-api-access-5wpk8") pod "4941a74b-f8db-4960-a1d7-7585b2099620" (UID: "4941a74b-f8db-4960-a1d7-7585b2099620"). InnerVolumeSpecName "kube-api-access-5wpk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.750092 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4941a74b-f8db-4960-a1d7-7585b2099620" (UID: "4941a74b-f8db-4960-a1d7-7585b2099620"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.754497 4675 generic.go:334] "Generic (PLEG): container finished" podID="4941a74b-f8db-4960-a1d7-7585b2099620" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" exitCode=0 Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.755433 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.755963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f"} Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.756021 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"3a6b939ba9b0f49f20dbbe8a4b746d440813cfcb7dc60015231946094cb0835d"} Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.756044 4675 scope.go:117] "RemoveContainer" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.771564 4675 scope.go:117] "RemoveContainer" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 
07:06:50.794331 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.797432 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.800245 4675 scope.go:117] "RemoveContainer" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.824832 4675 scope.go:117] "RemoveContainer" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.825433 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.825468 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.825479 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:50 crc kubenswrapper[4675]: E0124 07:06:50.829649 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f\": container with ID starting with b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f not found: ID does not exist" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.829712 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f"} err="failed to get container status \"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f\": rpc error: code = NotFound desc = could not find container \"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f\": container with ID starting with b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f not found: ID does not exist" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.829751 4675 scope.go:117] "RemoveContainer" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" Jan 24 07:06:50 crc kubenswrapper[4675]: E0124 07:06:50.830059 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a\": container with ID starting with 659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a not found: ID does not exist" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.830076 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a"} err="failed to get container status \"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a\": rpc error: code = NotFound desc = could not find container \"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a\": container with ID starting with 659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a not found: ID does not exist" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.830090 4675 scope.go:117] "RemoveContainer" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" Jan 24 07:06:50 crc kubenswrapper[4675]: E0124 
07:06:50.830279 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6\": container with ID starting with 7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6 not found: ID does not exist" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.830296 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6"} err="failed to get container status \"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6\": rpc error: code = NotFound desc = could not find container \"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6\": container with ID starting with 7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6 not found: ID does not exist" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.949133 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" path="/var/lib/kubelet/pods/4941a74b-f8db-4960-a1d7-7585b2099620/volumes" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.833911 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"] Jan 24 07:06:52 crc kubenswrapper[4675]: E0124 07:06:52.836821 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-utilities" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.838470 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-utilities" Jan 24 07:06:52 crc kubenswrapper[4675]: E0124 07:06:52.838575 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-content" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.838633 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-content" Jan 24 07:06:52 crc kubenswrapper[4675]: E0124 07:06:52.838704 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.838785 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.839030 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.839785 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848052 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848140 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848320 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t8rpx" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848363 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848471 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 
24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.854432 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"] Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.954940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd7f\" (UniqueName: \"kubernetes.io/projected/0cf0ee32-c416-4629-a441-268fbe054062-kube-api-access-fvd7f\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.954984 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-webhook-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.955037 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-apiservice-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.059807 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"] Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.060444 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.061666 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvd7f\" (UniqueName: \"kubernetes.io/projected/0cf0ee32-c416-4629-a441-268fbe054062-kube-api-access-fvd7f\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.061701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-webhook-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.061765 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-apiservice-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.062697 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.065815 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.066680 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ww4hr" Jan 24 07:06:53 crc 
kubenswrapper[4675]: I0124 07:06:53.066890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-webhook-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.066939 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-apiservice-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.116537 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvd7f\" (UniqueName: \"kubernetes.io/projected/0cf0ee32-c416-4629-a441-268fbe054062-kube-api-access-fvd7f\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.148250 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"] Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.153534 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.164559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-webhook-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.164663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-apiservice-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.164689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9pr\" (UniqueName: \"kubernetes.io/projected/893cbc8e-86ae-4910-8693-061301da0ba6-kube-api-access-cc9pr\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.266187 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-webhook-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.266232 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-apiservice-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.266263 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9pr\" (UniqueName: \"kubernetes.io/projected/893cbc8e-86ae-4910-8693-061301da0ba6-kube-api-access-cc9pr\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.270622 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-webhook-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.288270 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9pr\" (UniqueName: \"kubernetes.io/projected/893cbc8e-86ae-4910-8693-061301da0ba6-kube-api-access-cc9pr\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.290176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-apiservice-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " 
pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.405270 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.443140 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"] Jan 24 07:06:53 crc kubenswrapper[4675]: W0124 07:06:53.456856 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cf0ee32_c416_4629_a441_268fbe054062.slice/crio-a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab WatchSource:0}: Error finding container a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab: Status 404 returned error can't find the container with id a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.668525 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"] Jan 24 07:06:53 crc kubenswrapper[4675]: W0124 07:06:53.676937 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod893cbc8e_86ae_4910_8693_061301da0ba6.slice/crio-ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac WatchSource:0}: Error finding container ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac: Status 404 returned error can't find the container with id ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.770197 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" 
event={"ID":"893cbc8e-86ae-4910-8693-061301da0ba6","Type":"ContainerStarted","Data":"ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac"} Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.771058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" event={"ID":"0cf0ee32-c416-4629-a441-268fbe054062","Type":"ContainerStarted","Data":"a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab"} Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.636890 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.637241 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.679606 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.822948 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:58 crc kubenswrapper[4675]: I0124 07:06:58.875114 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:58 crc kubenswrapper[4675]: I0124 07:06:58.875691 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r2zwn" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server" containerID="cri-o://c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" gracePeriod=2 Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.502332 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.542556 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"6b44a7e8-d142-46b0-94fd-a0635212218a\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.542638 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"6b44a7e8-d142-46b0-94fd-a0635212218a\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.542681 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"6b44a7e8-d142-46b0-94fd-a0635212218a\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.543543 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities" (OuterVolumeSpecName: "utilities") pod "6b44a7e8-d142-46b0-94fd-a0635212218a" (UID: "6b44a7e8-d142-46b0-94fd-a0635212218a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.556019 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6" (OuterVolumeSpecName: "kube-api-access-r2mj6") pod "6b44a7e8-d142-46b0-94fd-a0635212218a" (UID: "6b44a7e8-d142-46b0-94fd-a0635212218a"). InnerVolumeSpecName "kube-api-access-r2mj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.614256 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b44a7e8-d142-46b0-94fd-a0635212218a" (UID: "6b44a7e8-d142-46b0-94fd-a0635212218a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.644942 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.644969 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.644979 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816176 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" exitCode=0 Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"} Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816279 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"50d2143d6a15ce1effec7497d78dc551e5d67a35354b8365b053d80281a18399"} Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816295 4675 scope.go:117] "RemoveContainer" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816459 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.823362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" event={"ID":"893cbc8e-86ae-4910-8693-061301da0ba6","Type":"ContainerStarted","Data":"638dc5f054ef699396f1519ba591ea3455defd8fd3829bad716febfaa7b48cfb"} Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.823996 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.829319 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" event={"ID":"0cf0ee32-c416-4629-a441-268fbe054062","Type":"ContainerStarted","Data":"7ceefb2707b7849f8f522ef3b2d6e5cfc2a7824c7ccb4fc66f76b49f8b249d88"} Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.829764 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.835410 4675 scope.go:117] "RemoveContainer" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.848275 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" podStartSLOduration=0.943908259 podStartE2EDuration="6.848258689s" podCreationTimestamp="2026-01-24 07:06:53 +0000 UTC" firstStartedPulling="2026-01-24 07:06:53.683875597 +0000 UTC m=+814.979980820" lastFinishedPulling="2026-01-24 07:06:59.588226027 +0000 UTC m=+820.884331250" observedRunningTime="2026-01-24 07:06:59.846139368 +0000 UTC m=+821.142244591" watchObservedRunningTime="2026-01-24 07:06:59.848258689 +0000 UTC m=+821.144363912" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.865335 4675 scope.go:117] "RemoveContainer" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.888634 4675 scope.go:117] "RemoveContainer" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" Jan 24 07:06:59 crc kubenswrapper[4675]: E0124 07:06:59.890643 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e\": container with ID starting with c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e not found: ID does not exist" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.890899 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"} err="failed to get container status \"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e\": rpc error: code = NotFound desc = could not find container \"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e\": container with ID starting with c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e not found: ID does not exist" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.890937 4675 
scope.go:117] "RemoveContainer" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.894695 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" podStartSLOduration=1.785445368 podStartE2EDuration="7.89467377s" podCreationTimestamp="2026-01-24 07:06:52 +0000 UTC" firstStartedPulling="2026-01-24 07:06:53.459531189 +0000 UTC m=+814.755636412" lastFinishedPulling="2026-01-24 07:06:59.568759591 +0000 UTC m=+820.864864814" observedRunningTime="2026-01-24 07:06:59.880626214 +0000 UTC m=+821.176731437" watchObservedRunningTime="2026-01-24 07:06:59.89467377 +0000 UTC m=+821.190778993" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.894989 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:59 crc kubenswrapper[4675]: E0124 07:06:59.895943 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786\": container with ID starting with 8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786 not found: ID does not exist" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.895991 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"} err="failed to get container status \"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786\": rpc error: code = NotFound desc = could not find container \"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786\": container with ID starting with 8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786 not found: ID does not exist" Jan 24 
07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.896013 4675 scope.go:117] "RemoveContainer" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492" Jan 24 07:06:59 crc kubenswrapper[4675]: E0124 07:06:59.896648 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492\": container with ID starting with 0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492 not found: ID does not exist" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.896689 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"} err="failed to get container status \"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492\": rpc error: code = NotFound desc = could not find container \"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492\": container with ID starting with 0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492 not found: ID does not exist" Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.905626 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:07:00 crc kubenswrapper[4675]: I0124 07:07:00.948524 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" path="/var/lib/kubelet/pods/6b44a7e8-d142-46b0-94fd-a0635212218a/volumes" Jan 24 07:07:13 crc kubenswrapper[4675]: I0124 07:07:13.410504 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" Jan 24 07:07:33 crc kubenswrapper[4675]: I0124 07:07:33.158266 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028094 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-78f4w"] Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.028311 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-content" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-content" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.028339 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-utilities" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028345 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-utilities" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.028355 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028361 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028471 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.030156 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.038493 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.038493 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.038567 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2fwq2" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.042856 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.043518 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.045596 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097062 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097344 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-reloader\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097370 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097384 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdmj\" (UniqueName: \"kubernetes.io/projected/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-kube-api-access-7vdmj\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-startup\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097489 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-conf\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097503 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxht\" (UniqueName: \"kubernetes.io/projected/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-kube-api-access-4nxht\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097517 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-sockets\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097542 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.101328 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.164567 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5bpc7"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.165373 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.170657 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4r4bl" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.170888 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.170948 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.171011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.199450 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-startup\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.199777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-conf\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.199916 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxht\" (UniqueName: \"kubernetes.io/projected/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-kube-api-access-4nxht\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-sockets\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200183 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200313 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-reloader\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200519 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200626 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vdmj\" (UniqueName: \"kubernetes.io/projected/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-kube-api-access-7vdmj\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: 
\"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.201461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-sockets\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.201827 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.202003 4675 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.204530 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs podName:fa6ce697-eaf1-4412-a7ca-40a3eb3fa712 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:34.704511064 +0000 UTC m=+856.000616287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs") pod "frr-k8s-78f4w" (UID: "fa6ce697-eaf1-4412-a7ca-40a3eb3fa712") : secret "frr-k8s-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.202669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-conf\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.202859 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-reloader\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.202494 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-startup\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.220800 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-c4k6t"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.226320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxht\" (UniqueName: \"kubernetes.io/projected/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-kube-api-access-4nxht\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.238965 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.241161 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vdmj\" (UniqueName: \"kubernetes.io/projected/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-kube-api-access-7vdmj\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.260044 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.270825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.272796 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-c4k6t"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.301797 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metallb-excludel2\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.301988 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnnz\" (UniqueName: \"kubernetes.io/projected/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-kube-api-access-rdnnz\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " 
pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.302015 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.302060 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.402925 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqntg\" (UniqueName: \"kubernetes.io/projected/af8e6625-69ed-4901-9577-65cc6fafe0d1-kube-api-access-fqntg\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdnnz\" (UniqueName: \"kubernetes.io/projected/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-kube-api-access-rdnnz\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-metrics-certs\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403477 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metallb-excludel2\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-cert\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404462 4675 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404593 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist podName:21ad12ca-5157-4c19-9e8c-34fbe8fa9b96 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:34.904571995 +0000 UTC m=+856.200677298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist") pod "speaker-5bpc7" (UID: "21ad12ca-5157-4c19-9e8c-34fbe8fa9b96") : secret "metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404530 4675 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404795 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs podName:21ad12ca-5157-4c19-9e8c-34fbe8fa9b96 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:34.90478452 +0000 UTC m=+856.200889793 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs") pod "speaker-5bpc7" (UID: "21ad12ca-5157-4c19-9e8c-34fbe8fa9b96") : secret "speaker-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.405132 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metallb-excludel2\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.439312 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnnz\" (UniqueName: \"kubernetes.io/projected/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-kube-api-access-rdnnz\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.506302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-metrics-certs\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.506625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-cert\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.506678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqntg\" (UniqueName: 
\"kubernetes.io/projected/af8e6625-69ed-4901-9577-65cc6fafe0d1-kube-api-access-fqntg\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.509112 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.511016 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-metrics-certs\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.520535 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-cert\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.522810 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqntg\" (UniqueName: \"kubernetes.io/projected/af8e6625-69ed-4901-9577-65cc6fafe0d1-kube-api-access-fqntg\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.583351 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.592467 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.708225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.713935 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.780474 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-c4k6t"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.911088 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.911251 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.911412 4675 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.911464 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist podName:21ad12ca-5157-4c19-9e8c-34fbe8fa9b96 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:35.911448138 +0000 UTC m=+857.207553361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist") pod "speaker-5bpc7" (UID: "21ad12ca-5157-4c19-9e8c-34fbe8fa9b96") : secret "metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.916175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.956010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.014444 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" event={"ID":"032ac1eb-bb7f-4f94-b9ad-4d710032f3af","Type":"ContainerStarted","Data":"94e750f0ae6ac9183645c58f6564c0ea448569d3238fb8d39e9d25945e7d1eea"} Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.015659 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c4k6t" event={"ID":"af8e6625-69ed-4901-9577-65cc6fafe0d1","Type":"ContainerStarted","Data":"859429b026aa754fd934437ae19b3294ff2811182303dc413e7149dd6cc66f83"} Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.921548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.944451 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.001283 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5bpc7" Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.023855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"109edcdaadd0f45cbe2a7b641e11bb2f039e60332933aeb9788df1103d845e80"} Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.027618 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c4k6t" event={"ID":"af8e6625-69ed-4901-9577-65cc6fafe0d1","Type":"ContainerStarted","Data":"bb953cbb3571d7fa20935f0c4d699e092a6f104f7399328ac2ed6f5cffde3044"} Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.027643 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c4k6t" event={"ID":"af8e6625-69ed-4901-9577-65cc6fafe0d1","Type":"ContainerStarted","Data":"34095ca9ec088ddeb46c70be8ddb7e066479273835bd505a0f9173566f85e598"} Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.028059 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:36 crc kubenswrapper[4675]: W0124 07:07:36.035278 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ad12ca_5157_4c19_9e8c_34fbe8fa9b96.slice/crio-02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8 WatchSource:0}: Error finding container 02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8: Status 404 returned error can't find the container with id 02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8 Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.049913 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-c4k6t" podStartSLOduration=2.04989579 podStartE2EDuration="2.04989579s" 
podCreationTimestamp="2026-01-24 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:07:36.049302196 +0000 UTC m=+857.345407449" watchObservedRunningTime="2026-01-24 07:07:36.04989579 +0000 UTC m=+857.346001013" Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.043843 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5bpc7" event={"ID":"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96","Type":"ContainerStarted","Data":"5b364002e9ca948df1c59004b1639fd758859def43c15bb386297b5e3f2a996f"} Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.043906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5bpc7" event={"ID":"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96","Type":"ContainerStarted","Data":"55c10003d6d8bcae0786761c23428ce104ebec925a5c8d52fec1d952347f5e79"} Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.043923 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5bpc7" event={"ID":"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96","Type":"ContainerStarted","Data":"02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8"} Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.044109 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5bpc7" Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.065424 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5bpc7" podStartSLOduration=3.065407891 podStartE2EDuration="3.065407891s" podCreationTimestamp="2026-01-24 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:07:37.063779391 +0000 UTC m=+858.359884614" watchObservedRunningTime="2026-01-24 07:07:37.065407891 +0000 UTC m=+858.361513114" Jan 24 07:07:43 crc kubenswrapper[4675]: 
I0124 07:07:43.089535 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" event={"ID":"032ac1eb-bb7f-4f94-b9ad-4d710032f3af","Type":"ContainerStarted","Data":"7c819efc4f465b4a5612228aff471cdf1de4b4b3668452765fee0de53b7202d0"} Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.090052 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.091681 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa6ce697-eaf1-4412-a7ca-40a3eb3fa712" containerID="4781b14d9af67c3c9982c74fcf04c706d2e84d50ebdaa1a49f08ee784de4c9ee" exitCode=0 Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.091764 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerDied","Data":"4781b14d9af67c3c9982c74fcf04c706d2e84d50ebdaa1a49f08ee784de4c9ee"} Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.106314 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" podStartSLOduration=0.911545931 podStartE2EDuration="9.106293881s" podCreationTimestamp="2026-01-24 07:07:34 +0000 UTC" firstStartedPulling="2026-01-24 07:07:34.6073201 +0000 UTC m=+855.903425323" lastFinishedPulling="2026-01-24 07:07:42.80206805 +0000 UTC m=+864.098173273" observedRunningTime="2026-01-24 07:07:43.10378953 +0000 UTC m=+864.399894753" watchObservedRunningTime="2026-01-24 07:07:43.106293881 +0000 UTC m=+864.402399104" Jan 24 07:07:44 crc kubenswrapper[4675]: I0124 07:07:44.096921 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa6ce697-eaf1-4412-a7ca-40a3eb3fa712" containerID="46240cca07b659f78a621d75df937a4dc1e26462af437c89664affcaccb6f475" exitCode=0 Jan 24 07:07:44 crc kubenswrapper[4675]: I0124 07:07:44.097022 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerDied","Data":"46240cca07b659f78a621d75df937a4dc1e26462af437c89664affcaccb6f475"} Jan 24 07:07:45 crc kubenswrapper[4675]: I0124 07:07:45.107528 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa6ce697-eaf1-4412-a7ca-40a3eb3fa712" containerID="faf4eefce35793e912ce4bccc29429eefc37468c976a2605c95dc2abe15c1876" exitCode=0 Jan 24 07:07:45 crc kubenswrapper[4675]: I0124 07:07:45.107579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerDied","Data":"faf4eefce35793e912ce4bccc29429eefc37468c976a2605c95dc2abe15c1876"} Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.019272 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5bpc7" Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118767 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"edde50c157a4941b24dc7f1026ab713c5763b8fd9fb27b5bb6af374e73e46792"} Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"c3705ce7d4e8946bed2f0e7864dd24b868f8b40871b481425e99de323739964b"} Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"cd55267f82d6d766124fd4c2c0b3e9263d6c540a69e99a2d2202d3cf527e87e9"} Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118837 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"4b935eb6d368ca0ddc907282f9ca4a4064cc425eb85524db469f8f280af7406d"} Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118849 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"28b2ee528d24d172f73674e78a5096babe54dc85c899e425d5f7a6ac811fa758"} Jan 24 07:07:47 crc kubenswrapper[4675]: I0124 07:07:47.126784 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"38e96173b01ba929ed5758ad5d421b8deda1a6baf8fe8778bc1ee19647c1de55"} Jan 24 07:07:47 crc kubenswrapper[4675]: I0124 07:07:47.127039 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:47 crc kubenswrapper[4675]: I0124 07:07:47.151268 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-78f4w" podStartSLOduration=7.0629863 podStartE2EDuration="14.151251618s" podCreationTimestamp="2026-01-24 07:07:33 +0000 UTC" firstStartedPulling="2026-01-24 07:07:35.73156076 +0000 UTC m=+857.027666023" lastFinishedPulling="2026-01-24 07:07:42.819826118 +0000 UTC m=+864.115931341" observedRunningTime="2026-01-24 07:07:47.147568869 +0000 UTC m=+868.443674082" watchObservedRunningTime="2026-01-24 07:07:47.151251618 +0000 UTC m=+868.447356841" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.432891 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"] Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.433587 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.436189 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-js4nb" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.436205 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.448004 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.448948 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"] Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.502014 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"openstack-operator-index-vd22g\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") " pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.603382 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"openstack-operator-index-vd22g\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") " pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.629931 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"openstack-operator-index-vd22g\" (UID: 
\"d5597e1b-5874-4483-bf56-679470f1a288\") " pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.765513 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.957031 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.968604 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"] Jan 24 07:07:50 crc kubenswrapper[4675]: I0124 07:07:50.024277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:50 crc kubenswrapper[4675]: I0124 07:07:50.145883 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerStarted","Data":"65792c9a12bd755b36506910452a8b4e058f9cc6c39651f649e047267f59c4e5"} Jan 24 07:07:52 crc kubenswrapper[4675]: I0124 07:07:52.820966 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"] Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.165232 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerStarted","Data":"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"} Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.182887 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vd22g" podStartSLOduration=1.8336292680000001 podStartE2EDuration="4.182869485s" podCreationTimestamp="2026-01-24 07:07:49 +0000 UTC" 
firstStartedPulling="2026-01-24 07:07:50.00037676 +0000 UTC m=+871.296481993" lastFinishedPulling="2026-01-24 07:07:52.349616967 +0000 UTC m=+873.645722210" observedRunningTime="2026-01-24 07:07:53.181992993 +0000 UTC m=+874.478098236" watchObservedRunningTime="2026-01-24 07:07:53.182869485 +0000 UTC m=+874.478974698" Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.421056 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d4hsh"] Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.422157 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.436997 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d4hsh"] Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.452901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwkd\" (UniqueName: \"kubernetes.io/projected/954076ba-3e6f-4e5b-9b3f-4637840d5021-kube-api-access-4kwkd\") pod \"openstack-operator-index-d4hsh\" (UID: \"954076ba-3e6f-4e5b-9b3f-4637840d5021\") " pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.554498 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kwkd\" (UniqueName: \"kubernetes.io/projected/954076ba-3e6f-4e5b-9b3f-4637840d5021-kube-api-access-4kwkd\") pod \"openstack-operator-index-d4hsh\" (UID: \"954076ba-3e6f-4e5b-9b3f-4637840d5021\") " pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.574157 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kwkd\" (UniqueName: \"kubernetes.io/projected/954076ba-3e6f-4e5b-9b3f-4637840d5021-kube-api-access-4kwkd\") pod 
\"openstack-operator-index-d4hsh\" (UID: \"954076ba-3e6f-4e5b-9b3f-4637840d5021\") " pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.736896 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.105698 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d4hsh"] Jan 24 07:07:54 crc kubenswrapper[4675]: W0124 07:07:54.109248 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954076ba_3e6f_4e5b_9b3f_4637840d5021.slice/crio-26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e WatchSource:0}: Error finding container 26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e: Status 404 returned error can't find the container with id 26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.171611 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vd22g" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server" containerID="cri-o://b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" gracePeriod=2 Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.171914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d4hsh" event={"ID":"954076ba-3e6f-4e5b-9b3f-4637840d5021","Type":"ContainerStarted","Data":"26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e"} Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.410256 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 
07:07:54.591343 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.057447 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.175741 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"d5597e1b-5874-4483-bf56-679470f1a288\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") " Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.177890 4675 generic.go:334] "Generic (PLEG): container finished" podID="d5597e1b-5874-4483-bf56-679470f1a288" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" exitCode=0 Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.177948 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerDied","Data":"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"} Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.177998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerDied","Data":"65792c9a12bd755b36506910452a8b4e058f9cc6c39651f649e047267f59c4e5"} Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.178015 4675 scope.go:117] "RemoveContainer" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.178101 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.181574 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d4hsh" event={"ID":"954076ba-3e6f-4e5b-9b3f-4637840d5021","Type":"ContainerStarted","Data":"b47c5bd61aa6b00f6f74f8fd52af5ce36b4de3a3c6e98a3aa959ca097f810639"} Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.182975 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz" (OuterVolumeSpecName: "kube-api-access-cx6zz") pod "d5597e1b-5874-4483-bf56-679470f1a288" (UID: "d5597e1b-5874-4483-bf56-679470f1a288"). InnerVolumeSpecName "kube-api-access-cx6zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.201716 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d4hsh" podStartSLOduration=1.5279759560000001 podStartE2EDuration="2.20169601s" podCreationTimestamp="2026-01-24 07:07:53 +0000 UTC" firstStartedPulling="2026-01-24 07:07:54.112984096 +0000 UTC m=+875.409089309" lastFinishedPulling="2026-01-24 07:07:54.78670413 +0000 UTC m=+876.082809363" observedRunningTime="2026-01-24 07:07:55.20000831 +0000 UTC m=+876.496113533" watchObservedRunningTime="2026-01-24 07:07:55.20169601 +0000 UTC m=+876.497801243" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.209466 4675 scope.go:117] "RemoveContainer" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" Jan 24 07:07:55 crc kubenswrapper[4675]: E0124 07:07:55.209967 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187\": container with ID starting with 
b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187 not found: ID does not exist" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.209995 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"} err="failed to get container status \"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187\": rpc error: code = NotFound desc = could not find container \"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187\": container with ID starting with b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187 not found: ID does not exist" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.277358 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") on node \"crc\" DevicePath \"\"" Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.514429 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"] Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.522265 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"] Jan 24 07:07:56 crc kubenswrapper[4675]: I0124 07:07:56.956623 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5597e1b-5874-4483-bf56-679470f1a288" path="/var/lib/kubelet/pods/d5597e1b-5874-4483-bf56-679470f1a288/volumes" Jan 24 07:08:03 crc kubenswrapper[4675]: I0124 07:08:03.737697 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:08:03 crc kubenswrapper[4675]: I0124 07:08:03.738331 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:08:03 crc kubenswrapper[4675]: I0124 07:08:03.779907 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:08:04 crc kubenswrapper[4675]: I0124 07:08:04.284433 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-d4hsh" Jan 24 07:08:04 crc kubenswrapper[4675]: I0124 07:08:04.960127 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.670085 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"] Jan 24 07:08:05 crc kubenswrapper[4675]: E0124 07:08:05.670373 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.670393 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.670563 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.671859 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.674229 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-knqct" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.684494 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"] Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.821958 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.822028 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.822063 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 
07:08:05.923177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923216 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923807 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.943392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.990323 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:06 crc kubenswrapper[4675]: I0124 07:08:06.255422 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"] Jan 24 07:08:07 crc kubenswrapper[4675]: I0124 07:08:07.261871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerStarted","Data":"63b8bbc6f3983e757a4b92d9d6c2745edb3e94b79f32c6f55d0bfb6c5ed3ef4c"} Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.269585 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerID="02729d3552e2026e205ce65c084a59e6a889858715a6349b9e2e4899a4af312d" exitCode=0 Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.269679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"02729d3552e2026e205ce65c084a59e6a889858715a6349b9e2e4899a4af312d"} Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.630493 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.630550 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:08:09 crc kubenswrapper[4675]: I0124 07:08:09.280089 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerID="dd726de03c77019531350763caa8896216b2bcc66927ae88a63165f351c168f1" exitCode=0 Jan 24 07:08:09 crc kubenswrapper[4675]: I0124 07:08:09.280128 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"dd726de03c77019531350763caa8896216b2bcc66927ae88a63165f351c168f1"} Jan 24 07:08:10 crc kubenswrapper[4675]: I0124 07:08:10.291219 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerID="15254b01f8855ef68b4f6025f1248a4d1429df62e079d5e80ebb3f6b15482e0d" exitCode=0 Jan 24 07:08:10 crc kubenswrapper[4675]: I0124 07:08:10.291352 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"15254b01f8855ef68b4f6025f1248a4d1429df62e079d5e80ebb3f6b15482e0d"} Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.587229 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.704615 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.704668 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.704708 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.705420 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle" (OuterVolumeSpecName: "bundle") pod "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" (UID: "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.713026 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz" (OuterVolumeSpecName: "kube-api-access-ptprz") pod "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" (UID: "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d"). InnerVolumeSpecName "kube-api-access-ptprz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.720990 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util" (OuterVolumeSpecName: "util") pod "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" (UID: "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.806187 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.806217 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.806229 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") on node \"crc\" DevicePath \"\"" Jan 24 07:08:12 crc kubenswrapper[4675]: I0124 07:08:12.304532 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" 
event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"63b8bbc6f3983e757a4b92d9d6c2745edb3e94b79f32c6f55d0bfb6c5ed3ef4c"} Jan 24 07:08:12 crc kubenswrapper[4675]: I0124 07:08:12.304563 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b8bbc6f3983e757a4b92d9d6c2745edb3e94b79f32c6f55d0bfb6c5ed3ef4c" Jan 24 07:08:12 crc kubenswrapper[4675]: I0124 07:08:12.304601 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.766742 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"] Jan 24 07:08:17 crc kubenswrapper[4675]: E0124 07:08:17.767494 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="util" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767507 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="util" Jan 24 07:08:17 crc kubenswrapper[4675]: E0124 07:08:17.767519 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="pull" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767525 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="pull" Jan 24 07:08:17 crc kubenswrapper[4675]: E0124 07:08:17.767533 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="extract" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767538 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="extract" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767645 4675 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="extract" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.768025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.771098 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z2v5x" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.810036 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"] Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.884675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cqv\" (UniqueName: \"kubernetes.io/projected/fc267189-e8ca-412c-bb9a-6b251571a514-kube-api-access-l7cqv\") pod \"openstack-operator-controller-init-d498c57f9-4vbdv\" (UID: \"fc267189-e8ca-412c-bb9a-6b251571a514\") " pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.985615 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7cqv\" (UniqueName: \"kubernetes.io/projected/fc267189-e8ca-412c-bb9a-6b251571a514-kube-api-access-l7cqv\") pod \"openstack-operator-controller-init-d498c57f9-4vbdv\" (UID: \"fc267189-e8ca-412c-bb9a-6b251571a514\") " pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:18 crc kubenswrapper[4675]: I0124 07:08:18.008968 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7cqv\" (UniqueName: \"kubernetes.io/projected/fc267189-e8ca-412c-bb9a-6b251571a514-kube-api-access-l7cqv\") pod \"openstack-operator-controller-init-d498c57f9-4vbdv\" 
(UID: \"fc267189-e8ca-412c-bb9a-6b251571a514\") " pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:18 crc kubenswrapper[4675]: I0124 07:08:18.088601 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:18 crc kubenswrapper[4675]: I0124 07:08:18.550591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"] Jan 24 07:08:19 crc kubenswrapper[4675]: I0124 07:08:19.341242 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" event={"ID":"fc267189-e8ca-412c-bb9a-6b251571a514","Type":"ContainerStarted","Data":"bc0512432a942e266fb15578d49b90e9d2e33b65a7bcf3d6c4a2d202d504d91f"} Jan 24 07:08:24 crc kubenswrapper[4675]: I0124 07:08:24.382341 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" event={"ID":"fc267189-e8ca-412c-bb9a-6b251571a514","Type":"ContainerStarted","Data":"2b32da462870f3e911e342621f0621b2f306a6bd768e898fba088e501697c129"} Jan 24 07:08:24 crc kubenswrapper[4675]: I0124 07:08:24.383006 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:24 crc kubenswrapper[4675]: I0124 07:08:24.419264 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" podStartSLOduration=2.630818585 podStartE2EDuration="7.419248596s" podCreationTimestamp="2026-01-24 07:08:17 +0000 UTC" firstStartedPulling="2026-01-24 07:08:18.559226664 +0000 UTC m=+899.855331877" lastFinishedPulling="2026-01-24 07:08:23.347656665 +0000 UTC m=+904.643761888" observedRunningTime="2026-01-24 07:08:24.416185912 +0000 UTC 
m=+905.712291155" watchObservedRunningTime="2026-01-24 07:08:24.419248596 +0000 UTC m=+905.715353819" Jan 24 07:08:28 crc kubenswrapper[4675]: I0124 07:08:28.092487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" Jan 24 07:08:38 crc kubenswrapper[4675]: I0124 07:08:38.630487 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:08:38 crc kubenswrapper[4675]: I0124 07:08:38.631092 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.319315 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.320917 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.324041 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-p5nc5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.332625 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.383871 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.384733 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.389944 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b7mrc" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.396699 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.397604 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.400396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfws\" (UniqueName: \"kubernetes.io/projected/2db25911-f36e-43ae-8f47-b042ec82266e-kube-api-access-2zfws\") pod \"barbican-operator-controller-manager-7f86f8796f-dwbq6\" (UID: \"2db25911-f36e-43ae-8f47-b042ec82266e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.400438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbk4x\" (UniqueName: \"kubernetes.io/projected/b8285f65-9930-4bb9-9e18-b6ffe19f45fb-kube-api-access-tbk4x\") pod \"cinder-operator-controller-manager-69cf5d4557-6jbwg\" (UID: \"b8285f65-9930-4bb9-9e18-b6ffe19f45fb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.413499 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jccng" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.421034 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.440470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.453320 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.454220 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.458363 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-g4zgx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfws\" (UniqueName: \"kubernetes.io/projected/2db25911-f36e-43ae-8f47-b042ec82266e-kube-api-access-2zfws\") pod \"barbican-operator-controller-manager-7f86f8796f-dwbq6\" (UID: \"2db25911-f36e-43ae-8f47-b042ec82266e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503234 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbk4x\" (UniqueName: \"kubernetes.io/projected/b8285f65-9930-4bb9-9e18-b6ffe19f45fb-kube-api-access-tbk4x\") pod \"cinder-operator-controller-manager-69cf5d4557-6jbwg\" (UID: \"b8285f65-9930-4bb9-9e18-b6ffe19f45fb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpth\" (UniqueName: \"kubernetes.io/projected/e7263d16-14c3-4254-821a-cbf99b7cf3e4-kube-api-access-sdpth\") pod \"glance-operator-controller-manager-78fdd796fd-thqtz\" (UID: \"e7263d16-14c3-4254-821a-cbf99b7cf3e4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplvd\" (UniqueName: \"kubernetes.io/projected/6003a1f9-ad0e-49f6-8750-6ac2208560cc-kube-api-access-gplvd\") 
pod \"designate-operator-controller-manager-b45d7bf98-79fwx\" (UID: \"6003a1f9-ad0e-49f6-8750-6ac2208560cc\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.519484 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.536700 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfws\" (UniqueName: \"kubernetes.io/projected/2db25911-f36e-43ae-8f47-b042ec82266e-kube-api-access-2zfws\") pod \"barbican-operator-controller-manager-7f86f8796f-dwbq6\" (UID: \"2db25911-f36e-43ae-8f47-b042ec82266e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.550338 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbk4x\" (UniqueName: \"kubernetes.io/projected/b8285f65-9930-4bb9-9e18-b6ffe19f45fb-kube-api-access-tbk4x\") pod \"cinder-operator-controller-manager-69cf5d4557-6jbwg\" (UID: \"b8285f65-9930-4bb9-9e18-b6ffe19f45fb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.579781 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.580581 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.593124 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dvdjg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.594682 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.595375 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.600569 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tk46k" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.600813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.610751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr9k\" (UniqueName: \"kubernetes.io/projected/7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320-kube-api-access-nhr9k\") pod \"heat-operator-controller-manager-594c8c9d5d-mqk98\" (UID: \"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.610813 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpth\" (UniqueName: \"kubernetes.io/projected/e7263d16-14c3-4254-821a-cbf99b7cf3e4-kube-api-access-sdpth\") pod \"glance-operator-controller-manager-78fdd796fd-thqtz\" (UID: \"e7263d16-14c3-4254-821a-cbf99b7cf3e4\") " 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.610838 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplvd\" (UniqueName: \"kubernetes.io/projected/6003a1f9-ad0e-49f6-8750-6ac2208560cc-kube-api-access-gplvd\") pod \"designate-operator-controller-manager-b45d7bf98-79fwx\" (UID: \"6003a1f9-ad0e-49f6-8750-6ac2208560cc\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.626929 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.627732 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.629784 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.634029 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nhg7j" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.634204 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.640733 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.648782 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.671113 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.672019 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.673958 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpth\" (UniqueName: \"kubernetes.io/projected/e7263d16-14c3-4254-821a-cbf99b7cf3e4-kube-api-access-sdpth\") pod \"glance-operator-controller-manager-78fdd796fd-thqtz\" (UID: \"e7263d16-14c3-4254-821a-cbf99b7cf3e4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.682545 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplvd\" (UniqueName: \"kubernetes.io/projected/6003a1f9-ad0e-49f6-8750-6ac2208560cc-kube-api-access-gplvd\") pod \"designate-operator-controller-manager-b45d7bf98-79fwx\" (UID: \"6003a1f9-ad0e-49f6-8750-6ac2208560cc\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.698031 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.702093 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-72gdv" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714379 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkslj\" (UniqueName: \"kubernetes.io/projected/4aa5aa88-c6f2-4000-9a9d-3b14e23220de-kube-api-access-kkslj\") pod \"horizon-operator-controller-manager-77d5c5b54f-67vkh\" (UID: \"4aa5aa88-c6f2-4000-9a9d-3b14e23220de\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714431 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkwvr\" (UniqueName: \"kubernetes.io/projected/743af71f-3542-439c-b3a1-33a7b9ae34f1-kube-api-access-pkwvr\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714486 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bcb\" (UniqueName: \"kubernetes.io/projected/06f423e8-7ba9-497d-a587-cc880d66625b-kube-api-access-j8bcb\") pod \"ironic-operator-controller-manager-598f7747c9-l7jq5\" (UID: \"06f423e8-7ba9-497d-a587-cc880d66625b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714514 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod 
\"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714542 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhr9k\" (UniqueName: \"kubernetes.io/projected/7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320-kube-api-access-nhr9k\") pod \"heat-operator-controller-manager-594c8c9d5d-mqk98\" (UID: \"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.716525 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.721966 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.736427 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.737250 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.744054 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bznx5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.757449 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhr9k\" (UniqueName: \"kubernetes.io/projected/7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320-kube-api-access-nhr9k\") pod \"heat-operator-controller-manager-594c8c9d5d-mqk98\" (UID: \"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.780050 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.788644 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.789434 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.797073 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.797799 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.805297 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kgmqr" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.805565 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g75sx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.808954 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljqq\" (UniqueName: \"kubernetes.io/projected/7660e41e-527d-4806-8ef3-6dee25fa72c5-kube-api-access-8ljqq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-vjf84\" (UID: \"7660e41e-527d-4806-8ef3-6dee25fa72c5\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn97d\" (UniqueName: \"kubernetes.io/projected/5b3a45f7-a1eb-44a2-b0be-7c77b190d50c-kube-api-access-mn97d\") pod \"keystone-operator-controller-manager-b8b6d4659-bqd4q\" (UID: \"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bcb\" (UniqueName: \"kubernetes.io/projected/06f423e8-7ba9-497d-a587-cc880d66625b-kube-api-access-j8bcb\") pod 
\"ironic-operator-controller-manager-598f7747c9-l7jq5\" (UID: \"06f423e8-7ba9-497d-a587-cc880d66625b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817486 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8stn\" (UniqueName: \"kubernetes.io/projected/e09ce8a8-a2a4-4fec-b36d-a97910aced0f-kube-api-access-z8stn\") pod \"manila-operator-controller-manager-78c6999f6f-6lq96\" (UID: \"e09ce8a8-a2a4-4fec-b36d-a97910aced0f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817530 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkslj\" (UniqueName: \"kubernetes.io/projected/4aa5aa88-c6f2-4000-9a9d-3b14e23220de-kube-api-access-kkslj\") pod \"horizon-operator-controller-manager-77d5c5b54f-67vkh\" (UID: \"4aa5aa88-c6f2-4000-9a9d-3b14e23220de\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkwvr\" (UniqueName: \"kubernetes.io/projected/743af71f-3542-439c-b3a1-33a7b9ae34f1-kube-api-access-pkwvr\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" 
Jan 24 07:08:47 crc kubenswrapper[4675]: E0124 07:08:47.817910 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:47 crc kubenswrapper[4675]: E0124 07:08:47.817950 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:48.31793455 +0000 UTC m=+929.614039773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.821169 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.855507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bcb\" (UniqueName: \"kubernetes.io/projected/06f423e8-7ba9-497d-a587-cc880d66625b-kube-api-access-j8bcb\") pod \"ironic-operator-controller-manager-598f7747c9-l7jq5\" (UID: \"06f423e8-7ba9-497d-a587-cc880d66625b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.863852 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkslj\" (UniqueName: \"kubernetes.io/projected/4aa5aa88-c6f2-4000-9a9d-3b14e23220de-kube-api-access-kkslj\") pod \"horizon-operator-controller-manager-77d5c5b54f-67vkh\" (UID: \"4aa5aa88-c6f2-4000-9a9d-3b14e23220de\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc 
kubenswrapper[4675]: I0124 07:08:47.894403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkwvr\" (UniqueName: \"kubernetes.io/projected/743af71f-3542-439c-b3a1-33a7b9ae34f1-kube-api-access-pkwvr\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.914798 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.919479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn97d\" (UniqueName: \"kubernetes.io/projected/5b3a45f7-a1eb-44a2-b0be-7c77b190d50c-kube-api-access-mn97d\") pod \"keystone-operator-controller-manager-b8b6d4659-bqd4q\" (UID: \"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.919563 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8stn\" (UniqueName: \"kubernetes.io/projected/e09ce8a8-a2a4-4fec-b36d-a97910aced0f-kube-api-access-z8stn\") pod \"manila-operator-controller-manager-78c6999f6f-6lq96\" (UID: \"e09ce8a8-a2a4-4fec-b36d-a97910aced0f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.919627 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljqq\" (UniqueName: \"kubernetes.io/projected/7660e41e-527d-4806-8ef3-6dee25fa72c5-kube-api-access-8ljqq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-vjf84\" (UID: \"7660e41e-527d-4806-8ef3-6dee25fa72c5\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.942309 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.942834 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.976682 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8stn\" (UniqueName: \"kubernetes.io/projected/e09ce8a8-a2a4-4fec-b36d-a97910aced0f-kube-api-access-z8stn\") pod \"manila-operator-controller-manager-78c6999f6f-6lq96\" (UID: \"e09ce8a8-a2a4-4fec-b36d-a97910aced0f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.977450 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.004419 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.006469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn97d\" (UniqueName: \"kubernetes.io/projected/5b3a45f7-a1eb-44a2-b0be-7c77b190d50c-kube-api-access-mn97d\") pod \"keystone-operator-controller-manager-b8b6d4659-bqd4q\" (UID: \"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.029055 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p66r4" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.029759 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.030487 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.044208 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.044385 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qz5\" (UniqueName: \"kubernetes.io/projected/724ac56d-9f4e-40f9-98f7-3a65c807f89c-kube-api-access-99qz5\") pod \"neutron-operator-controller-manager-78d58447c5-dzvlp\" (UID: \"724ac56d-9f4e-40f9-98f7-3a65c807f89c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.045575 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.048404 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljqq\" (UniqueName: \"kubernetes.io/projected/7660e41e-527d-4806-8ef3-6dee25fa72c5-kube-api-access-8ljqq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-vjf84\" (UID: \"7660e41e-527d-4806-8ef3-6dee25fa72c5\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.051245 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-skxlb" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.065772 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.075029 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.075853 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.088235 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.103454 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mjrhf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.110813 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.111659 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.112298 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.122013 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.122953 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.133993 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.148242 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jx2pj" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.148653 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.148814 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mlccn" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.150362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qz5\" (UniqueName: \"kubernetes.io/projected/724ac56d-9f4e-40f9-98f7-3a65c807f89c-kube-api-access-99qz5\") pod \"neutron-operator-controller-manager-78d58447c5-dzvlp\" (UID: \"724ac56d-9f4e-40f9-98f7-3a65c807f89c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.150420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pf24\" (UniqueName: \"kubernetes.io/projected/bdc167a3-9335-4b3d-9696-a1d03b9ae618-kube-api-access-5pf24\") pod \"octavia-operator-controller-manager-7bd9774b6-q6qn9\" (UID: \"bdc167a3-9335-4b3d-9696-a1d03b9ae618\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.150511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7k4s\" (UniqueName: \"kubernetes.io/projected/6f867475-7eee-431c-97ee-12ae861193c7-kube-api-access-l7k4s\") pod 
\"nova-operator-controller-manager-6b8bc8d87d-4lmvf\" (UID: \"6f867475-7eee-431c-97ee-12ae861193c7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.158776 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.165786 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.179083 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.179861 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.190239 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4zjqq" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.212253 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.229840 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.244410 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.245514 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.250786 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lp9nd" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265407 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pf24\" (UniqueName: \"kubernetes.io/projected/bdc167a3-9335-4b3d-9696-a1d03b9ae618-kube-api-access-5pf24\") pod \"octavia-operator-controller-manager-7bd9774b6-q6qn9\" (UID: \"bdc167a3-9335-4b3d-9696-a1d03b9ae618\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265447 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265531 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmzsm\" (UniqueName: \"kubernetes.io/projected/a1041f21-5d7d-4b17-84ff-ee83332e604d-kube-api-access-rmzsm\") pod \"ovn-operator-controller-manager-55db956ddc-n4kll\" (UID: \"a1041f21-5d7d-4b17-84ff-ee83332e604d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbsl\" (UniqueName: \"kubernetes.io/projected/20b0ee18-4569-4428-956f-d8795904f368-kube-api-access-bjbsl\") pod 
\"placement-operator-controller-manager-5d646b7d76-l5hrz\" (UID: \"20b0ee18-4569-4428-956f-d8795904f368\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265578 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7k4s\" (UniqueName: \"kubernetes.io/projected/6f867475-7eee-431c-97ee-12ae861193c7-kube-api-access-l7k4s\") pod \"nova-operator-controller-manager-6b8bc8d87d-4lmvf\" (UID: \"6f867475-7eee-431c-97ee-12ae861193c7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtsj2\" (UniqueName: \"kubernetes.io/projected/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-kube-api-access-jtsj2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.266604 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qz5\" (UniqueName: \"kubernetes.io/projected/724ac56d-9f4e-40f9-98f7-3a65c807f89c-kube-api-access-99qz5\") pod \"neutron-operator-controller-manager-78d58447c5-dzvlp\" (UID: \"724ac56d-9f4e-40f9-98f7-3a65c807f89c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.305665 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.306194 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pf24\" (UniqueName: 
\"kubernetes.io/projected/bdc167a3-9335-4b3d-9696-a1d03b9ae618-kube-api-access-5pf24\") pod \"octavia-operator-controller-manager-7bd9774b6-q6qn9\" (UID: \"bdc167a3-9335-4b3d-9696-a1d03b9ae618\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.324972 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.325775 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.336467 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7k4s\" (UniqueName: \"kubernetes.io/projected/6f867475-7eee-431c-97ee-12ae861193c7-kube-api-access-l7k4s\") pod \"nova-operator-controller-manager-6b8bc8d87d-4lmvf\" (UID: \"6f867475-7eee-431c-97ee-12ae861193c7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.339458 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l9j7r" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.345977 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.346974 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369657 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369696 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369750 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnq2\" (UniqueName: \"kubernetes.io/projected/4bfb9011-058d-494d-96ce-a39202c7b851-kube-api-access-8lnq2\") pod \"swift-operator-controller-manager-7d55b89685-9rvmf\" (UID: \"4bfb9011-058d-494d-96ce-a39202c7b851\") " pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369776 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng2kw\" (UniqueName: \"kubernetes.io/projected/47e89f8e-f652-43a1-a36a-2db184700f3e-kube-api-access-ng2kw\") pod \"telemetry-operator-controller-manager-85cd9769bb-n6jmw\" (UID: \"47e89f8e-f652-43a1-a36a-2db184700f3e\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369794 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmzsm\" (UniqueName: \"kubernetes.io/projected/a1041f21-5d7d-4b17-84ff-ee83332e604d-kube-api-access-rmzsm\") pod \"ovn-operator-controller-manager-55db956ddc-n4kll\" (UID: \"a1041f21-5d7d-4b17-84ff-ee83332e604d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369819 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbsl\" (UniqueName: \"kubernetes.io/projected/20b0ee18-4569-4428-956f-d8795904f368-kube-api-access-bjbsl\") pod \"placement-operator-controller-manager-5d646b7d76-l5hrz\" (UID: \"20b0ee18-4569-4428-956f-d8795904f368\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtsj2\" (UniqueName: \"kubernetes.io/projected/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-kube-api-access-jtsj2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371066 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371155 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:48.87113958 +0000 UTC m=+930.167244803 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371362 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371397 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.371387225 +0000 UTC m=+930.667492458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.381875 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.385935 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rc85t" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.390271 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.412601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbsl\" (UniqueName: \"kubernetes.io/projected/20b0ee18-4569-4428-956f-d8795904f368-kube-api-access-bjbsl\") pod \"placement-operator-controller-manager-5d646b7d76-l5hrz\" (UID: \"20b0ee18-4569-4428-956f-d8795904f368\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.413021 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.423250 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.424640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtsj2\" (UniqueName: \"kubernetes.io/projected/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-kube-api-access-jtsj2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.439955 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.441530 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.447966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmzsm\" (UniqueName: \"kubernetes.io/projected/a1041f21-5d7d-4b17-84ff-ee83332e604d-kube-api-access-rmzsm\") pod \"ovn-operator-controller-manager-55db956ddc-n4kll\" (UID: \"a1041f21-5d7d-4b17-84ff-ee83332e604d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.448245 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.448552 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.457735 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9ldcr" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.469941 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.470698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnq2\" (UniqueName: \"kubernetes.io/projected/4bfb9011-058d-494d-96ce-a39202c7b851-kube-api-access-8lnq2\") pod \"swift-operator-controller-manager-7d55b89685-9rvmf\" (UID: \"4bfb9011-058d-494d-96ce-a39202c7b851\") " pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.470750 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6z8w\" (UniqueName: \"kubernetes.io/projected/fae349a1-6c08-4424-abe2-42dddccd55cc-kube-api-access-s6z8w\") pod \"test-operator-controller-manager-69797bbcbd-k7crk\" (UID: \"fae349a1-6c08-4424-abe2-42dddccd55cc\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.470840 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng2kw\" (UniqueName: \"kubernetes.io/projected/47e89f8e-f652-43a1-a36a-2db184700f3e-kube-api-access-ng2kw\") pod \"telemetry-operator-controller-manager-85cd9769bb-n6jmw\" (UID: 
\"47e89f8e-f652-43a1-a36a-2db184700f3e\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.512842 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng2kw\" (UniqueName: \"kubernetes.io/projected/47e89f8e-f652-43a1-a36a-2db184700f3e-kube-api-access-ng2kw\") pod \"telemetry-operator-controller-manager-85cd9769bb-n6jmw\" (UID: \"47e89f8e-f652-43a1-a36a-2db184700f3e\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.513553 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnq2\" (UniqueName: \"kubernetes.io/projected/4bfb9011-058d-494d-96ce-a39202c7b851-kube-api-access-8lnq2\") pod \"swift-operator-controller-manager-7d55b89685-9rvmf\" (UID: \"4bfb9011-058d-494d-96ce-a39202c7b851\") " pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.557005 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.572165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6z8w\" (UniqueName: \"kubernetes.io/projected/fae349a1-6c08-4424-abe2-42dddccd55cc-kube-api-access-s6z8w\") pod \"test-operator-controller-manager-69797bbcbd-k7crk\" (UID: \"fae349a1-6c08-4424-abe2-42dddccd55cc\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.572237 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vq2z\" (UniqueName: \"kubernetes.io/projected/f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480-kube-api-access-8vq2z\") pod \"watcher-operator-controller-manager-6d9458688d-9fkjr\" (UID: \"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.584086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.646355 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.648043 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.712734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vq2z\" (UniqueName: \"kubernetes.io/projected/f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480-kube-api-access-8vq2z\") pod \"watcher-operator-controller-manager-6d9458688d-9fkjr\" (UID: \"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.713272 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.725266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6z8w\" (UniqueName: \"kubernetes.io/projected/fae349a1-6c08-4424-abe2-42dddccd55cc-kube-api-access-s6z8w\") pod \"test-operator-controller-manager-69797bbcbd-k7crk\" (UID: \"fae349a1-6c08-4424-abe2-42dddccd55cc\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.731742 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.732707 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nv27l" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.732869 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.817819 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"] Jan 24 
07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.826902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdz6d\" (UniqueName: \"kubernetes.io/projected/d94b056e-c445-4033-8d02-a794dae4b671-kube-api-access-zdz6d\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.826995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.827030 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.841437 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.842276 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.848760 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vq2z\" (UniqueName: \"kubernetes.io/projected/f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480-kube-api-access-8vq2z\") pod \"watcher-operator-controller-manager-6d9458688d-9fkjr\" (UID: \"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.851061 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tjzcp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.853700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.928619 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.928952 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdz6d\" (UniqueName: \"kubernetes.io/projected/d94b056e-c445-4033-8d02-a794dae4b671-kube-api-access-zdz6d\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.929069 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.929185 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929442 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929557 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.429542195 +0000 UTC m=+930.725647418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929677 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929761 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.92974186 +0000 UTC m=+931.225847143 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929811 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929845 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.429834022 +0000 UTC m=+930.725939345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.970566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdz6d\" (UniqueName: \"kubernetes.io/projected/d94b056e-c445-4033-8d02-a794dae4b671-kube-api-access-zdz6d\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.008945 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.030366 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrf8\" (UniqueName: \"kubernetes.io/projected/b7d1f492-700c-492e-a1c2-eae496f0133c-kube-api-access-tkrf8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9cmpf\" (UID: \"b7d1f492-700c-492e-a1c2-eae496f0133c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.105011 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.133005 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrf8\" (UniqueName: \"kubernetes.io/projected/b7d1f492-700c-492e-a1c2-eae496f0133c-kube-api-access-tkrf8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9cmpf\" (UID: \"b7d1f492-700c-492e-a1c2-eae496f0133c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.168817 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.177042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrf8\" (UniqueName: \"kubernetes.io/projected/b7d1f492-700c-492e-a1c2-eae496f0133c-kube-api-access-tkrf8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9cmpf\" (UID: \"b7d1f492-700c-492e-a1c2-eae496f0133c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.211992 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.441516 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.441575 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.441653 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441791 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441838 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:51.441821799 +0000 UTC m=+932.737927012 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441881 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441904 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:50.441896671 +0000 UTC m=+931.738001894 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441934 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441951 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:50.441945922 +0000 UTC m=+931.738051145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.585593 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" event={"ID":"e7263d16-14c3-4254-821a-cbf99b7cf3e4","Type":"ContainerStarted","Data":"4053dc81ded619efc039589e0c94d1e4e592b2943cf7a7328e5e39cf06982048"} Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.668164 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.685662 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.723813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.740468 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.764591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.954440 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.955204 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.955264 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:51.955248121 +0000 UTC m=+933.251353344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.008715 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.022174 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.035477 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.042103 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.049860 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"] Jan 
24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.064189 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.074832 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"] Jan 24 07:08:50 crc kubenswrapper[4675]: W0124 07:08:50.100307 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1041f21_5d7d_4b17_84ff_ee83332e604d.slice/crio-5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d WatchSource:0}: Error finding container 5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d: Status 404 returned error can't find the container with id 5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.333419 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99qz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-dzvlp_openstack-operators(724ac56d-9f4e-40f9-98f7-3a65c807f89c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.335019 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podUID="724ac56d-9f4e-40f9-98f7-3a65c807f89c" Jan 24 07:08:50 crc 
kubenswrapper[4675]: E0124 07:08:50.335358 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ng2kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-n6jmw_openstack-operators(47e89f8e-f652-43a1-a36a-2db184700f3e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.336482 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" podUID="47e89f8e-f652-43a1-a36a-2db184700f3e" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.337691 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"] Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.347494 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjbsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-l5hrz_openstack-operators(20b0ee18-4569-4428-956f-d8795904f368): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.348869 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podUID="20b0ee18-4569-4428-956f-d8795904f368" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.351624 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkrf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9cmpf_openstack-operators(b7d1f492-700c-492e-a1c2-eae496f0133c): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.353259 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.358485 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vq2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-9fkjr_openstack-operators(f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.359776 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podUID="f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.361791 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"] Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.364737 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6z8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-k7crk_openstack-operators(fae349a1-6c08-4424-abe2-42dddccd55cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.366296 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" podUID="fae349a1-6c08-4424-abe2-42dddccd55cc" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.375777 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.382330 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.388970 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.400134 4675 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.410212 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.465023 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.465076 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465155 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465212 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465224 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:52.465204899 +0000 UTC m=+933.761310122 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465262 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:52.46524648 +0000 UTC m=+933.761351703 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.621578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" event={"ID":"bdc167a3-9335-4b3d-9696-a1d03b9ae618","Type":"ContainerStarted","Data":"fce3ed873352b3493004b7f12851e86940064ee59fb36cfe44b6d0f590037607"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.625755 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" event={"ID":"6f867475-7eee-431c-97ee-12ae861193c7","Type":"ContainerStarted","Data":"b5547f9e5f6a00940af360ed218e79f5b9ac2007b13ea38c20233553047b9abe"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.634148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" 
event={"ID":"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c","Type":"ContainerStarted","Data":"c99108618bd169b32757bf909a7174483c424c7f08fb210aa89d23c3d6e3cba6"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.640671 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" event={"ID":"6003a1f9-ad0e-49f6-8750-6ac2208560cc","Type":"ContainerStarted","Data":"f1e05f41e78633866be273d44341a8e2f9eb9cb7a3dec40bc58871323e2dba19"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.661913 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" event={"ID":"47e89f8e-f652-43a1-a36a-2db184700f3e","Type":"ContainerStarted","Data":"4c99997aae050a8edbbe41f3b19aa65e8448ec5f6d041c7f49781fd5bad7b72f"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.664042 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" event={"ID":"b7d1f492-700c-492e-a1c2-eae496f0133c","Type":"ContainerStarted","Data":"bfd8c3fa740f2aef55a90aadb35c7febf7b161cb0a2bd830ac2fd795b119eade"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.667098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" event={"ID":"724ac56d-9f4e-40f9-98f7-3a65c807f89c","Type":"ContainerStarted","Data":"25414a42c86c2ab6df8f50725eec9c0eca9a569caa144a89ee212298468b3628"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.668158 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" 
podUID="47e89f8e-f652-43a1-a36a-2db184700f3e" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.670589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" event={"ID":"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320","Type":"ContainerStarted","Data":"1b59696e894c8157e6c64d947458f64b4320dcc604f4a05fd3b4036aa5813d5c"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.671064 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podUID="724ac56d-9f4e-40f9-98f7-3a65c807f89c" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.671225 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.672357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" event={"ID":"2db25911-f36e-43ae-8f47-b042ec82266e","Type":"ContainerStarted","Data":"d4bed27cff2eabfb8c25285528887e3a70feb9940f93a0bdfa2460613ed2771f"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.688456 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" 
event={"ID":"b8285f65-9930-4bb9-9e18-b6ffe19f45fb","Type":"ContainerStarted","Data":"c7c2f86a133de577b33d63eea556b7c9b1470aef1c2015f7c53bc997ec1dbfeb"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.699277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" event={"ID":"a1041f21-5d7d-4b17-84ff-ee83332e604d","Type":"ContainerStarted","Data":"5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.709077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" event={"ID":"4bfb9011-058d-494d-96ce-a39202c7b851","Type":"ContainerStarted","Data":"2f4654a2a38ec5baf7e572a02b42acadf88bd8b7324099b6a00bf4345aca34cd"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.732104 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" event={"ID":"e09ce8a8-a2a4-4fec-b36d-a97910aced0f","Type":"ContainerStarted","Data":"0ad7f68cf58bc6d341e8795389c57b293effebebd8a3888f5220c7b91f0aacc6"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.734485 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" event={"ID":"fae349a1-6c08-4424-abe2-42dddccd55cc","Type":"ContainerStarted","Data":"a34a9f6f3a30b7b468bd8304b834fa90c36ad04497f3848d649af1d9008f5b7a"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.739714 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" 
podUID="fae349a1-6c08-4424-abe2-42dddccd55cc" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.740059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" event={"ID":"20b0ee18-4569-4428-956f-d8795904f368","Type":"ContainerStarted","Data":"d40fcf26a929d07e25381dc52b422c987ee8d2debd4c9e844c671b533fcdbbde"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.742878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podUID="20b0ee18-4569-4428-956f-d8795904f368" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.755095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" event={"ID":"4aa5aa88-c6f2-4000-9a9d-3b14e23220de","Type":"ContainerStarted","Data":"d417a22a7974fc952028863ef25535bbc30f67500cadb12266638641e5211b05"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.759089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" event={"ID":"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480","Type":"ContainerStarted","Data":"0a0985f908884d42b0c2caf26278516449cb3caaef568692c6a155d434d43662"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.760662 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podUID="f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.761902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" event={"ID":"7660e41e-527d-4806-8ef3-6dee25fa72c5","Type":"ContainerStarted","Data":"a39cf7750ebe2640001bfc7a264c967c35007bed7fa7dcd37ba31301a285993b"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.795619 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" event={"ID":"06f423e8-7ba9-497d-a587-cc880d66625b","Type":"ContainerStarted","Data":"1e41838d4284600ddbd83652937577974fd8e8d7062fe0166461d959137f4297"} Jan 24 07:08:51 crc kubenswrapper[4675]: I0124 07:08:51.480892 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.481040 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.481106 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:55.481087827 +0000 UTC m=+936.777193050 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807732 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podUID="724ac56d-9f4e-40f9-98f7-3a65c807f89c" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807745 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" podUID="fae349a1-6c08-4424-abe2-42dddccd55cc" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807748 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podUID="20b0ee18-4569-4428-956f-d8795904f368" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podUID="f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.808040 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" podUID="47e89f8e-f652-43a1-a36a-2db184700f3e" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.812617 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:08:51 crc kubenswrapper[4675]: I0124 07:08:51.986914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.987504 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.987756 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:55.987579201 +0000 UTC m=+937.283684424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.493424 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.493497 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:56.493481941 +0000 UTC m=+937.789587164 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: I0124 07:08:52.493306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:52 crc kubenswrapper[4675]: I0124 07:08:52.493898 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.493982 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.494013 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:56.494005524 +0000 UTC m=+937.790110747 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:55 crc kubenswrapper[4675]: I0124 07:08:55.543871 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:55 crc kubenswrapper[4675]: E0124 07:08:55.543995 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:55 crc kubenswrapper[4675]: E0124 07:08:55.544705 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:03.544680454 +0000 UTC m=+944.840785677 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: I0124 07:08:56.055868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.056038 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.056138 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:04.056116117 +0000 UTC m=+945.352221350 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: I0124 07:08:56.561908 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:56 crc kubenswrapper[4675]: I0124 07:08:56.561960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562112 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562213 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:04.562191161 +0000 UTC m=+945.858296394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562124 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562307 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:04.562287053 +0000 UTC m=+945.858392296 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.393822 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.394470 4675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.394628 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lnq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7d55b89685-9rvmf_openstack-operators(4bfb9011-058d-494d-96ce-a39202c7b851): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.395923 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" podUID="4bfb9011-058d-494d-96ce-a39202c7b851" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.978202 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" podUID="4bfb9011-058d-494d-96ce-a39202c7b851" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.074707 4675 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.074985 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmzsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-n4kll_openstack-operators(a1041f21-5d7d-4b17-84ff-ee83332e604d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.076665 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" podUID="a1041f21-5d7d-4b17-84ff-ee83332e604d" Jan 24 07:09:03 crc kubenswrapper[4675]: I0124 07:09:03.576595 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:03 crc kubenswrapper[4675]: I0124 07:09:03.583589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:03 crc kubenswrapper[4675]: I0124 07:09:03.850280 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.994346 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" podUID="a1041f21-5d7d-4b17-84ff-ee83332e604d" Jan 24 07:09:04 crc kubenswrapper[4675]: I0124 07:09:04.083823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.083976 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.084066 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:20.084043858 +0000 UTC m=+961.380149081 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.401272 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.401523 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-mqk98_openstack-operators(7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.403534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" podUID="7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320" Jan 24 07:09:04 crc kubenswrapper[4675]: I0124 07:09:04.590138 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod 
\"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:04 crc kubenswrapper[4675]: I0124 07:09:04.590219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590298 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590353 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590370 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:20.590350327 +0000 UTC m=+961.886455550 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590389 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. 
No retries permitted until 2026-01-24 07:09:20.590379117 +0000 UTC m=+961.886484340 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.965905 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.966213 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pf24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-q6qn9_openstack-operators(bdc167a3-9335-4b3d-9696-a1d03b9ae618): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.967414 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" podUID="bdc167a3-9335-4b3d-9696-a1d03b9ae618" Jan 24 07:09:05 crc kubenswrapper[4675]: E0124 07:09:05.002025 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" podUID="7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320" Jan 24 07:09:05 crc kubenswrapper[4675]: E0124 07:09:05.008075 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" podUID="bdc167a3-9335-4b3d-9696-a1d03b9ae618" Jan 24 07:09:06 crc kubenswrapper[4675]: E0124 07:09:06.428563 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 24 07:09:06 crc kubenswrapper[4675]: E0124 07:09:06.429421 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8stn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-6lq96_openstack-operators(e09ce8a8-a2a4-4fec-b36d-a97910aced0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:06 crc kubenswrapper[4675]: E0124 07:09:06.430705 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" podUID="e09ce8a8-a2a4-4fec-b36d-a97910aced0f" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.005471 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.005635 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zfws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-dwbq6_openstack-operators(2db25911-f36e-43ae-8f47-b042ec82266e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.008098 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" podUID="2db25911-f36e-43ae-8f47-b042ec82266e" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.018817 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" podUID="2db25911-f36e-43ae-8f47-b042ec82266e" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.020602 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" podUID="e09ce8a8-a2a4-4fec-b36d-a97910aced0f" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.630153 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.630242 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.630305 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.631273 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d"} 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.631352 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d" gracePeriod=600 Jan 24 07:09:10 crc kubenswrapper[4675]: I0124 07:09:10.040981 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d" exitCode=0 Jan 24 07:09:10 crc kubenswrapper[4675]: I0124 07:09:10.041058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d"} Jan 24 07:09:10 crc kubenswrapper[4675]: I0124 07:09:10.041174 4675 scope.go:117] "RemoveContainer" containerID="ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.071502 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.072140 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gplvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-79fwx_openstack-operators(6003a1f9-ad0e-49f6-8750-6ac2208560cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.073876 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" podUID="6003a1f9-ad0e-49f6-8750-6ac2208560cc" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.578973 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.579175 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mn97d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-bqd4q_openstack-operators(5b3a45f7-a1eb-44a2-b0be-7c77b190d50c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.581220 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" podUID="5b3a45f7-a1eb-44a2-b0be-7c77b190d50c" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.074509 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" podUID="5b3a45f7-a1eb-44a2-b0be-7c77b190d50c" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.074592 4675 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" podUID="6003a1f9-ad0e-49f6-8750-6ac2208560cc" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.099947 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.100174 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7k4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-4lmvf_openstack-operators(6f867475-7eee-431c-97ee-12ae861193c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.101235 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" podUID="6f867475-7eee-431c-97ee-12ae861193c7" Jan 24 07:09:16 crc kubenswrapper[4675]: E0124 07:09:16.082907 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" podUID="6f867475-7eee-431c-97ee-12ae861193c7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.182353 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.188977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.292376 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jx2pj" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.300938 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.688588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.688630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.692302 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.695568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.893406 4675 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nv27l" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.901387 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:24 crc kubenswrapper[4675]: I0124 07:09:24.207118 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"] Jan 24 07:09:24 crc kubenswrapper[4675]: E0124 07:09:24.309153 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 24 07:09:24 crc kubenswrapper[4675]: E0124 07:09:24.309329 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkrf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9cmpf_openstack-operators(b7d1f492-700c-492e-a1c2-eae496f0133c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:24 crc kubenswrapper[4675]: E0124 07:09:24.310448 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:09:24 crc kubenswrapper[4675]: I0124 07:09:24.623403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"] Jan 24 07:09:24 crc kubenswrapper[4675]: W0124 07:09:24.651017 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd94b056e_c445_4033_8d02_a794dae4b671.slice/crio-9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36 WatchSource:0}: Error finding container 9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36: Status 404 returned error can't find the container with id 9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36 Jan 24 07:09:24 crc kubenswrapper[4675]: I0124 07:09:24.869813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"] Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.165303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" event={"ID":"724ac56d-9f4e-40f9-98f7-3a65c807f89c","Type":"ContainerStarted","Data":"82f1f45085e218f1f55a9136e3254ccdc1fa211287ae233a8c55cac8dffefc53"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.166092 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.177312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.182613 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" event={"ID":"06f423e8-7ba9-497d-a587-cc880d66625b","Type":"ContainerStarted","Data":"56c1fd10638f29dafeb27ffa7115cdeb003a7359ee1ed235b339f0179c0d35cc"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.183210 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.191480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" event={"ID":"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480","Type":"ContainerStarted","Data":"4c92742f8a5e535bc6b5f5904e7da02110afdab55404e255c0f6ea8cc39c7423"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.192196 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.197470 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" event={"ID":"b8285f65-9930-4bb9-9e18-b6ffe19f45fb","Type":"ContainerStarted","Data":"339be70683573a1353b9db72eee041412f640bbf2cc7893012895c264456fba1"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.197991 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.198786 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" event={"ID":"ac97fbc7-211e-41e3-8e16-aff853a7c9f4","Type":"ContainerStarted","Data":"2068c809a181525b69fe05d3809785c8a1eec718c2185a28899d2b3b8b803b19"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.199462 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" event={"ID":"743af71f-3542-439c-b3a1-33a7b9ae34f1","Type":"ContainerStarted","Data":"ca040ce6c73ff89cb1d8cfda66434cdcc02438701f3d9eca0d03356fe8eb4802"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.200435 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" event={"ID":"4aa5aa88-c6f2-4000-9a9d-3b14e23220de","Type":"ContainerStarted","Data":"d49831cb2e97f93fa52d785421513bd3f869a61da647ab699136fbb0a62dfb1a"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.200795 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.217283 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" event={"ID":"e7263d16-14c3-4254-821a-cbf99b7cf3e4","Type":"ContainerStarted","Data":"c51f3414571300129f4bfcc079c05de17dda99f9d8d9cd466fb10edcbcd997bf"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.217897 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.225262 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podStartSLOduration=4.782385576 podStartE2EDuration="38.225244253s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.333133516 +0000 UTC m=+931.629238739" lastFinishedPulling="2026-01-24 07:09:23.775992173 +0000 UTC m=+965.072097416" observedRunningTime="2026-01-24 07:09:25.215293184 +0000 UTC m=+966.511398417" watchObservedRunningTime="2026-01-24 07:09:25.225244253 +0000 UTC m=+966.521349476" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.229929 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" 
event={"ID":"d94b056e-c445-4033-8d02-a794dae4b671","Type":"ContainerStarted","Data":"9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.245206 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" event={"ID":"7660e41e-527d-4806-8ef3-6dee25fa72c5","Type":"ContainerStarted","Data":"944fea83c7bc11a1e40d8b451a4db69c7df772b2ac9279d851cb339d898de736"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.245854 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.250675 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" event={"ID":"e09ce8a8-a2a4-4fec-b36d-a97910aced0f","Type":"ContainerStarted","Data":"3cbf50aa875898845a1c79a07a81ec6d89cf719b1cf951c5d499758f66c75902"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.250997 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.266934 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" event={"ID":"20b0ee18-4569-4428-956f-d8795904f368","Type":"ContainerStarted","Data":"3eddbcb3815937735fbb8f61184b590439d3e5fa93b1b7a3b5bb4c0f9ad7fc72"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.267585 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.276232 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" podStartSLOduration=12.959704097 podStartE2EDuration="38.276214071s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.754156595 +0000 UTC m=+931.050261818" lastFinishedPulling="2026-01-24 07:09:15.070666569 +0000 UTC m=+956.366771792" observedRunningTime="2026-01-24 07:09:25.264927699 +0000 UTC m=+966.561032922" watchObservedRunningTime="2026-01-24 07:09:25.276214071 +0000 UTC m=+966.572319294" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.295422 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" podStartSLOduration=12.351475161 podStartE2EDuration="38.295406824s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048923347 +0000 UTC m=+931.345028570" lastFinishedPulling="2026-01-24 07:09:15.99285501 +0000 UTC m=+957.288960233" observedRunningTime="2026-01-24 07:09:25.293712333 +0000 UTC m=+966.589817556" watchObservedRunningTime="2026-01-24 07:09:25.295406824 +0000 UTC m=+966.591512047" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.324980 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podStartSLOduration=3.907059319 podStartE2EDuration="37.324963706s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.358357033 +0000 UTC m=+931.654462256" lastFinishedPulling="2026-01-24 07:09:23.77626138 +0000 UTC m=+965.072366643" observedRunningTime="2026-01-24 07:09:25.322110557 +0000 UTC m=+966.618215780" watchObservedRunningTime="2026-01-24 07:09:25.324963706 +0000 UTC m=+966.621068929" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.353494 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" podStartSLOduration=12.979057794 podStartE2EDuration="38.353476293s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.696168978 +0000 UTC m=+930.992274201" lastFinishedPulling="2026-01-24 07:09:15.070587477 +0000 UTC m=+956.366692700" observedRunningTime="2026-01-24 07:09:25.349880387 +0000 UTC m=+966.645985610" watchObservedRunningTime="2026-01-24 07:09:25.353476293 +0000 UTC m=+966.649581516" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.386161 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" podStartSLOduration=12.440857325 podStartE2EDuration="38.38613859s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048975549 +0000 UTC m=+931.345080772" lastFinishedPulling="2026-01-24 07:09:15.994256814 +0000 UTC m=+957.290362037" observedRunningTime="2026-01-24 07:09:25.382098383 +0000 UTC m=+966.678203596" watchObservedRunningTime="2026-01-24 07:09:25.38613859 +0000 UTC m=+966.682243823" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.407164 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podStartSLOduration=3.979516307 podStartE2EDuration="37.407151187s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.347352329 +0000 UTC m=+931.643457552" lastFinishedPulling="2026-01-24 07:09:23.774987209 +0000 UTC m=+965.071092432" observedRunningTime="2026-01-24 07:09:25.402068494 +0000 UTC m=+966.698173717" watchObservedRunningTime="2026-01-24 07:09:25.407151187 +0000 UTC m=+966.703256400" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.422867 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" podStartSLOduration=4.093468127 podStartE2EDuration="38.422850475s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.074847452 +0000 UTC m=+931.370952675" lastFinishedPulling="2026-01-24 07:09:24.4042298 +0000 UTC m=+965.700335023" observedRunningTime="2026-01-24 07:09:25.422363373 +0000 UTC m=+966.718468596" watchObservedRunningTime="2026-01-24 07:09:25.422850475 +0000 UTC m=+966.718955688" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.443787 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" podStartSLOduration=13.642949321 podStartE2EDuration="38.443767049s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.271482934 +0000 UTC m=+930.567588157" lastFinishedPulling="2026-01-24 07:09:14.072300662 +0000 UTC m=+955.368405885" observedRunningTime="2026-01-24 07:09:25.440211403 +0000 UTC m=+966.736316626" watchObservedRunningTime="2026-01-24 07:09:25.443767049 +0000 UTC m=+966.739872272" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.281008 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" event={"ID":"4bfb9011-058d-494d-96ce-a39202c7b851","Type":"ContainerStarted","Data":"dbfbe8ce8a09ab164a80f34f2d0aee9706c3fd8e799a3beb69f09761934a1fe1"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.282145 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.297375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" 
event={"ID":"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320","Type":"ContainerStarted","Data":"99c5a18fd4b502378214dd0e94d6b9430cd885e614e1a777f5570eb696e8cece"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.325536 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" event={"ID":"bdc167a3-9335-4b3d-9696-a1d03b9ae618","Type":"ContainerStarted","Data":"1a2655619446c03c92907cb0e7f8741bdc37601beb16151b18dae586cf8e6643"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.325859 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.339652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" event={"ID":"2db25911-f36e-43ae-8f47-b042ec82266e","Type":"ContainerStarted","Data":"ed20e3deaf284e65bf65231423658cb6b0d9aaa0b963876f87e5e4d33be9c3df"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.340311 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.356183 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" event={"ID":"d94b056e-c445-4033-8d02-a794dae4b671","Type":"ContainerStarted","Data":"6912401445e9fb503c6e8d10bebbcf087b09deef2b9aac79008abfb54513abf7"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.356873 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.369154 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" event={"ID":"fae349a1-6c08-4424-abe2-42dddccd55cc","Type":"ContainerStarted","Data":"004bed238ea54f9c76a13b219ef4453dd5a7393199c83db5a016e5dc674b8944"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.369435 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.379537 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" podStartSLOduration=3.98592147 podStartE2EDuration="38.379520426s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048544278 +0000 UTC m=+931.344649541" lastFinishedPulling="2026-01-24 07:09:24.442143274 +0000 UTC m=+965.738248497" observedRunningTime="2026-01-24 07:09:26.333103258 +0000 UTC m=+967.629208481" watchObservedRunningTime="2026-01-24 07:09:26.379520426 +0000 UTC m=+967.675625649" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.397882 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" event={"ID":"a1041f21-5d7d-4b17-84ff-ee83332e604d","Type":"ContainerStarted","Data":"ebd8d51b294de492fe33747f19e9ad0bda97bcb927f4ce46858fa17ce1238191"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.398550 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.415773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" event={"ID":"47e89f8e-f652-43a1-a36a-2db184700f3e","Type":"ContainerStarted","Data":"b458189190653b942d980218eda463e4df9fd758369cbaefd7a1dd5139c89606"} Jan 24 
07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.497331 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" podStartSLOduration=5.383797908 podStartE2EDuration="39.497315055s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.328610377 +0000 UTC m=+931.624715600" lastFinishedPulling="2026-01-24 07:09:24.442127504 +0000 UTC m=+965.738232747" observedRunningTime="2026-01-24 07:09:26.3899984 +0000 UTC m=+967.686103623" watchObservedRunningTime="2026-01-24 07:09:26.497315055 +0000 UTC m=+967.793420278" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.519797 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" podStartSLOduration=38.519779246 podStartE2EDuration="38.519779246s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:09:26.516536578 +0000 UTC m=+967.812641801" watchObservedRunningTime="2026-01-24 07:09:26.519779246 +0000 UTC m=+967.815884469" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.560764 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" podStartSLOduration=4.637043891 podStartE2EDuration="38.560748994s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.364611675 +0000 UTC m=+931.660716898" lastFinishedPulling="2026-01-24 07:09:24.288316778 +0000 UTC m=+965.584422001" observedRunningTime="2026-01-24 07:09:26.559105925 +0000 UTC m=+967.855211148" watchObservedRunningTime="2026-01-24 07:09:26.560748994 +0000 UTC m=+967.856854217" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.624039 4675 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" podStartSLOduration=4.945337743 podStartE2EDuration="39.624018048s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.705567944 +0000 UTC m=+931.001673167" lastFinishedPulling="2026-01-24 07:09:24.384248249 +0000 UTC m=+965.680353472" observedRunningTime="2026-01-24 07:09:26.623791793 +0000 UTC m=+967.919897016" watchObservedRunningTime="2026-01-24 07:09:26.624018048 +0000 UTC m=+967.920123271" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.678408 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" podStartSLOduration=4.642170715 podStartE2EDuration="38.678391159s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.335193116 +0000 UTC m=+931.631298339" lastFinishedPulling="2026-01-24 07:09:24.37141356 +0000 UTC m=+965.667518783" observedRunningTime="2026-01-24 07:09:26.677535788 +0000 UTC m=+967.973641011" watchObservedRunningTime="2026-01-24 07:09:26.678391159 +0000 UTC m=+967.974496372" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.682998 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" podStartSLOduration=5.381831992 podStartE2EDuration="39.682983569s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.102508229 +0000 UTC m=+931.398613452" lastFinishedPulling="2026-01-24 07:09:24.403659816 +0000 UTC m=+965.699765029" observedRunningTime="2026-01-24 07:09:26.649160494 +0000 UTC m=+967.945265717" watchObservedRunningTime="2026-01-24 07:09:26.682983569 +0000 UTC m=+967.979088792" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.427154 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" event={"ID":"6003a1f9-ad0e-49f6-8750-6ac2208560cc","Type":"ContainerStarted","Data":"5d48ea2e9f223b4595379807aaf5de239110db8ae85df11b2f2b31404cf510c2"} Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.457137 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" podStartSLOduration=3.8030543 podStartE2EDuration="40.457122633s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.795527492 +0000 UTC m=+931.091632715" lastFinishedPulling="2026-01-24 07:09:26.449595825 +0000 UTC m=+967.745701048" observedRunningTime="2026-01-24 07:09:27.455262159 +0000 UTC m=+968.751367382" watchObservedRunningTime="2026-01-24 07:09:27.457122633 +0000 UTC m=+968.753227856" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.485226 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" podStartSLOduration=5.906924145 podStartE2EDuration="40.48520711s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.792869978 +0000 UTC m=+931.088975201" lastFinishedPulling="2026-01-24 07:09:24.371152943 +0000 UTC m=+965.667258166" observedRunningTime="2026-01-24 07:09:27.484172335 +0000 UTC m=+968.780277558" watchObservedRunningTime="2026-01-24 07:09:27.48520711 +0000 UTC m=+968.781312333" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.717709 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.916103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:09:28 
crc kubenswrapper[4675]: I0124 07:09:28.714103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.107902 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.441469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" event={"ID":"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c","Type":"ContainerStarted","Data":"71fc1a9a6301112ffa0f12e7733faa4bb40b3dad23a7d52a697e21a85c6cbad3"} Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.441954 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.444122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" event={"ID":"ac97fbc7-211e-41e3-8e16-aff853a7c9f4","Type":"ContainerStarted","Data":"5fbbaf84173124b47edf5d8fdb3463f5aef381f307479799985f207d6e958fde"} Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.444270 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.447300 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" event={"ID":"743af71f-3542-439c-b3a1-33a7b9ae34f1","Type":"ContainerStarted","Data":"9958b7b8c8a2f4a78e3c0a62b8589fba492cb8d6732d30733b03589531e57f21"} Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.448068 4675 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.459787 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" podStartSLOduration=4.990410139 podStartE2EDuration="42.459768998s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048732423 +0000 UTC m=+931.344837646" lastFinishedPulling="2026-01-24 07:09:27.518091282 +0000 UTC m=+968.814196505" observedRunningTime="2026-01-24 07:09:29.456674294 +0000 UTC m=+970.752779517" watchObservedRunningTime="2026-01-24 07:09:29.459768998 +0000 UTC m=+970.755874221" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.486155 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" podStartSLOduration=38.146044946 podStartE2EDuration="42.486136724s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:09:24.894521355 +0000 UTC m=+966.190626578" lastFinishedPulling="2026-01-24 07:09:29.234613123 +0000 UTC m=+970.530718356" observedRunningTime="2026-01-24 07:09:29.484250529 +0000 UTC m=+970.780355752" watchObservedRunningTime="2026-01-24 07:09:29.486136724 +0000 UTC m=+970.782241947" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.519759 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" podStartSLOduration=37.648835754 podStartE2EDuration="42.519737493s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:09:24.3585637 +0000 UTC m=+965.654668923" lastFinishedPulling="2026-01-24 07:09:29.229465429 +0000 UTC m=+970.525570662" observedRunningTime="2026-01-24 07:09:29.512400507 +0000 UTC 
m=+970.808505740" watchObservedRunningTime="2026-01-24 07:09:29.519737493 +0000 UTC m=+970.815842716" Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.457166 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" event={"ID":"6f867475-7eee-431c-97ee-12ae861193c7","Type":"ContainerStarted","Data":"382c4f4579727e4cac0d479ad9cabb3765a193fabf8900f2b0f0c8e80071354e"} Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.458367 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.471563 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" podStartSLOduration=4.098827885 podStartE2EDuration="43.471541948s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048454456 +0000 UTC m=+931.344559679" lastFinishedPulling="2026-01-24 07:09:29.421168519 +0000 UTC m=+970.717273742" observedRunningTime="2026-01-24 07:09:30.468237978 +0000 UTC m=+971.764343201" watchObservedRunningTime="2026-01-24 07:09:30.471541948 +0000 UTC m=+971.767647171" Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.907411 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.645800 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.702346 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:09:37 crc 
kubenswrapper[4675]: I0124 07:09:37.727898 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.787263 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.921705 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.946579 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.049502 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.115409 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.137224 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.218301 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.385789 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.418707 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.461573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.474342 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.559662 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.587207 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.718666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:09:38 crc kubenswrapper[4675]: E0124 07:09:38.948366 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:09:39 crc kubenswrapper[4675]: I0124 07:09:39.012076 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:09:40 crc kubenswrapper[4675]: I0124 07:09:40.306911 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:43 crc kubenswrapper[4675]: I0124 07:09:43.856291 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:53 crc kubenswrapper[4675]: I0124 07:09:53.946126 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:09:55 crc kubenswrapper[4675]: I0124 07:09:55.685451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" event={"ID":"b7d1f492-700c-492e-a1c2-eae496f0133c","Type":"ContainerStarted","Data":"c00a6ff9bfac024388ab78c52a578b491b454e5daa5dda8aea6a75bab92e39c9"} Jan 24 07:09:55 crc kubenswrapper[4675]: I0124 07:09:55.699030 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podStartSLOduration=3.562719721 podStartE2EDuration="1m7.699007753s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.348362423 +0000 UTC m=+931.644467646" lastFinishedPulling="2026-01-24 07:09:54.484650455 +0000 UTC m=+995.780755678" observedRunningTime="2026-01-24 07:09:55.697523607 +0000 UTC m=+996.993628830" watchObservedRunningTime="2026-01-24 07:09:55.699007753 +0000 UTC m=+996.995112976" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.835292 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.837135 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.839253 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.839314 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.839855 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.840077 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.841030 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9rr77" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.846989 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.006834 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.006875 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.006929 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.108634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.108849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.109570 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.110286 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.111180 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") 
pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.130988 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.158088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: W0124 07:10:11.592117 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46a81ceb_1fea_49de_8e4a_3f4b1dabaa55.slice/crio-900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385 WatchSource:0}: Error finding container 900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385: Status 404 returned error can't find the container with id 900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385 Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.593588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:12 crc kubenswrapper[4675]: I0124 07:10:12.169856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" event={"ID":"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55","Type":"ContainerStarted","Data":"900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385"} Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.607224 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.617922 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.626358 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.748735 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.748805 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.748849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.850474 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.850537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.850583 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.852038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.852072 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.886227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.934082 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.982457 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.021558 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.023166 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.034039 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.168683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.169077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.169110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 
07:10:14.271316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.272499 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.271380 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.272849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.273462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.292029 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msf89\" 
(UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.346113 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.489209 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:14 crc kubenswrapper[4675]: W0124 07:10:14.505827 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347ffd58_e301_4dd3_9416_2d6fa5ffdaa7.slice/crio-23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242 WatchSource:0}: Error finding container 23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242: Status 404 returned error can't find the container with id 23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242 Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.781016 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:14 crc kubenswrapper[4675]: W0124 07:10:14.789205 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3186ca49_238e_418a_95e7_f857a9f3bd75.slice/crio-68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436 WatchSource:0}: Error finding container 68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436: Status 404 returned error can't find the container with id 68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436 Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.802328 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:10:14 crc 
kubenswrapper[4675]: I0124 07:10:14.803545 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808240 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nnfwj" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808648 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808846 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808255 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808326 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.809556 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.816707 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984643 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984733 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984761 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984810 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984835 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985426 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985460 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985492 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985516 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089160 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089209 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089227 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089293 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089318 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089378 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089451 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089907 4675 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.090200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.090532 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.103971 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.107527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.109833 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.110693 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.111628 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.114130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.115692 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.117427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.126277 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.130460 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.170600 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.194322 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.194800 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.204505 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.204842 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205060 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205089 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205285 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 
07:10:15.205369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bt874" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.227595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerStarted","Data":"68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436"} Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.235697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" event={"ID":"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7","Type":"ContainerStarted","Data":"23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242"} Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293685 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293846 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc 
kubenswrapper[4675]: I0124 07:10:15.293876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294022 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294090 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc 
kubenswrapper[4675]: I0124 07:10:15.294116 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294227 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396457 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396528 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396551 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396568 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396601 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396623 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396671 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396712 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.397129 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.397694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.400282 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.403590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.404141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.404531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.411813 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc 
kubenswrapper[4675]: I0124 07:10:15.412293 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.412771 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.418668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.436590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.465397 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.530215 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.332695 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.333991 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341345 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6lg82" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341659 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341822 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.342470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.349668 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411145 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-default\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411224 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411254 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdbsr\" (UniqueName: \"kubernetes.io/projected/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kube-api-access-rdbsr\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411344 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411386 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411415 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kolla-config\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512155 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-default\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512246 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdbsr\" (UniqueName: \"kubernetes.io/projected/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kube-api-access-rdbsr\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512347 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kolla-config\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.513038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kolla-config\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.513694 4675 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.513766 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.514381 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-default\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.515051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.517339 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.534383 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.538240 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdbsr\" (UniqueName: \"kubernetes.io/projected/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kube-api-access-rdbsr\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.548377 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0" Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.662393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.558965 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.560182 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.562939 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wv6lc" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.563106 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.563313 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.572835 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.587963 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630212 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630270 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630333 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630410 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qfc\" (UniqueName: \"kubernetes.io/projected/e189b411-9dd6-496f-a001-41bc90c3fe00-kube-api-access-b6qfc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630595 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630706 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732472 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732498 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qfc\" (UniqueName: \"kubernetes.io/projected/e189b411-9dd6-496f-a001-41bc90c3fe00-kube-api-access-b6qfc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732944 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.733738 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.733907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.734245 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.735102 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.736803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.737097 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc 
kubenswrapper[4675]: I0124 07:10:17.761203 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qfc\" (UniqueName: \"kubernetes.io/projected/e189b411-9dd6-496f-a001-41bc90c3fe00-kube-api-access-b6qfc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.787318 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.884472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.992247 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.993208 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.996800 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-r96rq" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.997129 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.999235 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.008629 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.137712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnkrr\" (UniqueName: \"kubernetes.io/projected/b2446e52-3d97-46f2-ac99-4bb1af82d302-kube-api-access-rnkrr\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138073 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-kolla-config\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138251 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-config-data\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240312 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-config-data\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnkrr\" (UniqueName: \"kubernetes.io/projected/b2446e52-3d97-46f2-ac99-4bb1af82d302-kube-api-access-rnkrr\") pod \"memcached-0\" (UID: 
\"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-kolla-config\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.241183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-kolla-config\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.241500 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-config-data\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.243849 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.244512 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.279280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnkrr\" (UniqueName: 
\"kubernetes.io/projected/b2446e52-3d97-46f2-ac99-4bb1af82d302-kube-api-access-rnkrr\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.310337 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.847256 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.849276 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.851785 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vf6k6" Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.869265 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.988548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"kube-state-metrics-0\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " pod="openstack/kube-state-metrics-0" Jan 24 07:10:20 crc kubenswrapper[4675]: I0124 07:10:20.090443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"kube-state-metrics-0\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " pod="openstack/kube-state-metrics-0" Jan 24 07:10:20 crc kubenswrapper[4675]: I0124 07:10:20.114065 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q67r2\" (UniqueName: 
\"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"kube-state-metrics-0\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " pod="openstack/kube-state-metrics-0" Jan 24 07:10:20 crc kubenswrapper[4675]: I0124 07:10:20.170953 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.148799 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2x2kb"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.150454 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.153264 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.153455 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x9sdf" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.153592 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.167795 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.200824 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fsln2"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.202321 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-combined-ca-bundle\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-log-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-scripts\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255974 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.256037 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 
07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.256085 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbww\" (UniqueName: \"kubernetes.io/projected/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-kube-api-access-8kbww\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.256104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-ovn-controller-tls-certs\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.288072 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fsln2"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.356991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-lib\") pod \"ovn-controller-ovs-fsln2\" (UID: 
\"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357356 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kbww\" (UniqueName: \"kubernetes.io/projected/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-kube-api-access-8kbww\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-ovn-controller-tls-certs\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357402 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feda0648-be0d-4fb4-a3a4-42440e47fec0-scripts\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357426 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-log\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357449 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-etc-ovs\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " 
pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357465 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-combined-ca-bundle\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-run\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357508 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-log-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357526 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-scripts\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/feda0648-be0d-4fb4-a3a4-42440e47fec0-kube-api-access-g468d\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 
07:10:23.357637 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.358539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.358754 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-log-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.363418 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-ovn-controller-tls-certs\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.369500 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-scripts\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.379287 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-combined-ca-bundle\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.393509 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbww\" (UniqueName: \"kubernetes.io/projected/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-kube-api-access-8kbww\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459514 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-lib\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459595 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feda0648-be0d-4fb4-a3a4-42440e47fec0-scripts\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-log\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-etc-ovs\") pod \"ovn-controller-ovs-fsln2\" (UID: 
\"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459736 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-run\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459768 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/feda0648-be0d-4fb4-a3a4-42440e47fec0-kube-api-access-g468d\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-log\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-run\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.460071 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-etc-ovs\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.460289 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-lib\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.462243 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feda0648-be0d-4fb4-a3a4-42440e47fec0-scripts\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.478703 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/feda0648-be0d-4fb4-a3a4-42440e47fec0-kube-api-access-g468d\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.546409 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.573337 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.751701 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.047402 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.049001 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.051908 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-g8snx" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.052181 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.052343 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.054654 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.054836 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.054911 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221008 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221203 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwm66\" (UniqueName: \"kubernetes.io/projected/19fa54da-8a94-427d-b8c6-0881657d3324-kube-api-access-lwm66\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221310 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-config\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221360 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221399 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221414 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221618 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221678 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323682 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-config\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323911 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 
07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323936 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.324008 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwm66\" (UniqueName: \"kubernetes.io/projected/19fa54da-8a94-427d-b8c6-0881657d3324-kube-api-access-lwm66\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.324755 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.325340 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.326234 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.327540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-config\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.330753 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.338776 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.339936 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwm66\" (UniqueName: \"kubernetes.io/projected/19fa54da-8a94-427d-b8c6-0881657d3324-kube-api-access-lwm66\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.351074 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc 
kubenswrapper[4675]: I0124 07:10:24.354554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.379469 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.927899 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.932132 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.935799 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.935974 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.936085 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.936192 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xzx52" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.941408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.072794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2w4\" (UniqueName: \"kubernetes.io/projected/f1d973fa-2671-49fe-82f1-1862aa70d784-kube-api-access-cw2w4\") pod 
\"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.072863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.072902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073085 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073180 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073277 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 
crc kubenswrapper[4675]: I0124 07:10:27.073303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174236 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174256 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174274 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174313 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2w4\" (UniqueName: \"kubernetes.io/projected/f1d973fa-2671-49fe-82f1-1862aa70d784-kube-api-access-cw2w4\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.175208 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " 
pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.176297 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.176610 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.177932 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.180825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.181587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.184483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.191827 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2w4\" (UniqueName: \"kubernetes.io/projected/f1d973fa-2671-49fe-82f1-1862aa70d784-kube-api-access-cw2w4\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.202176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.286349 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:29 crc kubenswrapper[4675]: I0124 07:10:29.364511 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:10:29 crc kubenswrapper[4675]: I0124 07:10:29.390531 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b2446e52-3d97-46f2-ac99-4bb1af82d302","Type":"ContainerStarted","Data":"1e1711dc3ff8bfb93fc2b85d68f325dfc25c3f1c6209bea8ccb1f666cf326082"} Jan 24 07:10:30 crc kubenswrapper[4675]: W0124 07:10:30.137562 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321 WatchSource:0}: Error finding container dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321: Status 404 returned error can't find the container with id dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321 Jan 24 07:10:30 crc kubenswrapper[4675]: E0124 07:10:30.193749 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 07:10:30 crc kubenswrapper[4675]: E0124 07:10:30.194462 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gplkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-rxp64_openstack(46a81ceb-1fea-49de-8e4a-3f4b1dabaa55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:10:30 crc kubenswrapper[4675]: E0124 07:10:30.195681 4675 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" podUID="46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.413836 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerStarted","Data":"dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321"} Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.667079 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.765589 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.897977 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.034581 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.073029 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.149753 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.149862 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") pod \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.150380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.151491 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" (UID: "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.152031 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config" (OuterVolumeSpecName: "config") pod "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" (UID: "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.155276 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx" (OuterVolumeSpecName: "kube-api-access-gplkx") pod "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" (UID: "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55"). InnerVolumeSpecName "kube-api-access-gplkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.199554 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.231739 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.251835 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.251859 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.251873 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.396455 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fsln2"] Jan 24 07:10:31 crc kubenswrapper[4675]: W0124 07:10:31.422655 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeda0648_be0d_4fb4_a3a4_42440e47fec0.slice/crio-2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d WatchSource:0}: Error finding container 2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d: Status 404 returned error can't find the container with id 2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.424685 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerStarted","Data":"fa3a7f0d316a1089a5558c3fea3965d7f92601c58852075ba23fbeab293e2591"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.426605 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerStarted","Data":"8fc4ca63f03726f8d4f4612fb16075bb874d642ea255530f0cde869af0c01186"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.431432 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb" event={"ID":"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1","Type":"ContainerStarted","Data":"f4a06e2d0cf6671e04d0709ab0b6d54aed75cf01a0352517cd191de335ae03ed"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.443308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerStarted","Data":"740cddf5546043e34032d9d1e0dcd3e121c3fb18c86f2b4c15c0e929ce705afd"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.447070 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1d973fa-2671-49fe-82f1-1862aa70d784","Type":"ContainerStarted","Data":"3811e761deccebcd7be2da12e763920224c9f8202de3108beef61aea6223d4ec"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 
07:10:31.448955 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerStarted","Data":"e02cfc39376a20ed79af6aa4a70a95d12cb107645ef263fc4bfe2732893da583"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.451027 4675 generic.go:334] "Generic (PLEG): container finished" podID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerID="4fd0f48bc136df29146a9e239c77e392eeb5ff8cf314ea027a498f9dbf5099cb" exitCode=0 Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.451060 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerDied","Data":"4fd0f48bc136df29146a9e239c77e392eeb5ff8cf314ea027a498f9dbf5099cb"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.483623 4675 generic.go:334] "Generic (PLEG): container finished" podID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerID="db00382708b7c809b6812f592aaa217f75f3715a6895df49399a3befed2578cd" exitCode=0 Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.483716 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" event={"ID":"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7","Type":"ContainerDied","Data":"db00382708b7c809b6812f592aaa217f75f3715a6895df49399a3befed2578cd"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.488427 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" event={"ID":"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55","Type":"ContainerDied","Data":"900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.488529 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.585935 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.590579 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.139155 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.498584 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d"} Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.501343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerStarted","Data":"f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7"} Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.501628 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.522145 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" podStartSLOduration=3.697337798 podStartE2EDuration="19.52212553s" podCreationTimestamp="2026-01-24 07:10:13 +0000 UTC" firstStartedPulling="2026-01-24 07:10:14.791693774 +0000 UTC m=+1016.087798997" lastFinishedPulling="2026-01-24 07:10:30.616481506 +0000 UTC m=+1031.912586729" observedRunningTime="2026-01-24 07:10:32.517012087 +0000 UTC m=+1033.813117310" watchObservedRunningTime="2026-01-24 07:10:32.52212553 +0000 UTC m=+1033.818230753" Jan 24 07:10:32 crc 
kubenswrapper[4675]: I0124 07:10:32.964629 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" path="/var/lib/kubelet/pods/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55/volumes" Jan 24 07:10:34 crc kubenswrapper[4675]: I0124 07:10:34.516964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19fa54da-8a94-427d-b8c6-0881657d3324","Type":"ContainerStarted","Data":"62f5fb1b55e2d41e9410e6406c995f2f94bee324c32adb15f7946aa6c918cf36"} Jan 24 07:10:39 crc kubenswrapper[4675]: I0124 07:10:39.347806 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:39 crc kubenswrapper[4675]: I0124 07:10:39.430459 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:44 crc kubenswrapper[4675]: E0124 07:10:44.145803 4675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 24 07:10:44 crc kubenswrapper[4675]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 24 07:10:44 crc kubenswrapper[4675]: > podSandboxID="23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242" Jan 24 07:10:44 crc kubenswrapper[4675]: E0124 07:10:44.146324 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 24 07:10:44 crc kubenswrapper[4675]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwc6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-rrp8m_openstack(347ffd58-e301-4dd3-9416-2d6fa5ffdaa7): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 24 07:10:44 crc kubenswrapper[4675]: > logger="UnhandledError" Jan 24 07:10:44 crc kubenswrapper[4675]: E0124 07:10:44.148183 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" Jan 24 07:10:44 crc kubenswrapper[4675]: I0124 07:10:44.594713 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b2446e52-3d97-46f2-ac99-4bb1af82d302","Type":"ContainerStarted","Data":"b78347d701dab73f2a9ce94675d3875d5d44005c42b88778f2c4040087df0298"} Jan 24 07:10:44 crc 
kubenswrapper[4675]: I0124 07:10:44.617986 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.587104302 podStartE2EDuration="27.617966626s" podCreationTimestamp="2026-01-24 07:10:17 +0000 UTC" firstStartedPulling="2026-01-24 07:10:29.046499541 +0000 UTC m=+1030.342604774" lastFinishedPulling="2026-01-24 07:10:40.077361875 +0000 UTC m=+1041.373467098" observedRunningTime="2026-01-24 07:10:44.61312473 +0000 UTC m=+1045.909229963" watchObservedRunningTime="2026-01-24 07:10:44.617966626 +0000 UTC m=+1045.914071849" Jan 24 07:10:44 crc kubenswrapper[4675]: I0124 07:10:44.901029 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.029897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.029977 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.029995 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.041129 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h" (OuterVolumeSpecName: "kube-api-access-qwc6h") pod "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" (UID: "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"). InnerVolumeSpecName "kube-api-access-qwc6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.086269 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config" (OuterVolumeSpecName: "config") pod "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" (UID: "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.132114 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.132148 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.191213 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" (UID: "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.234235 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.606573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" event={"ID":"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7","Type":"ContainerDied","Data":"23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.606631 4675 scope.go:117] "RemoveContainer" containerID="db00382708b7c809b6812f592aaa217f75f3715a6895df49399a3befed2578cd" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.606771 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.623095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerStarted","Data":"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.624071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.627600 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"f92666dfa3d23140f8f187d267cdc1bb27ed28fbdc3af8b599e8f1268441877d"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.629299 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerStarted","Data":"0ae21b832453ed4f327a6995ee446a22269efa1f7b1b840709bec51995212ba9"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.632964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb" event={"ID":"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1","Type":"ContainerStarted","Data":"622c76f05e7ba7105e7db895c809f2f5474704e27760f7f3c7dc749d61df9341"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.650451 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.689422 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerStarted","Data":"fccd7353bc9b512dab19a9626b8d14920cde36940e4eb90c09849fdd3c88cc40"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.693135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19fa54da-8a94-427d-b8c6-0881657d3324","Type":"ContainerStarted","Data":"f82855f31cbb83691523304e204ae6b84c2869b08b4b226e08ae4ccd93488600"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.697791 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1d973fa-2671-49fe-82f1-1862aa70d784","Type":"ContainerStarted","Data":"40239ed7d0caf1c06b37d63d9177a204a011895a4300b6c52ecc8ee1f76c5f4a"} Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.697975 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.724302 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2x2kb" podStartSLOduration=10.313119387 podStartE2EDuration="22.724268102s" podCreationTimestamp="2026-01-24 07:10:23 +0000 UTC" 
firstStartedPulling="2026-01-24 07:10:31.256278604 +0000 UTC m=+1032.552383827" lastFinishedPulling="2026-01-24 07:10:43.667427319 +0000 UTC m=+1044.963532542" observedRunningTime="2026-01-24 07:10:45.720274536 +0000 UTC m=+1047.016379759" watchObservedRunningTime="2026-01-24 07:10:45.724268102 +0000 UTC m=+1047.020373325" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.774700 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.019901473 podStartE2EDuration="26.774677095s" podCreationTimestamp="2026-01-24 07:10:19 +0000 UTC" firstStartedPulling="2026-01-24 07:10:31.049904288 +0000 UTC m=+1032.346009511" lastFinishedPulling="2026-01-24 07:10:44.80467991 +0000 UTC m=+1046.100785133" observedRunningTime="2026-01-24 07:10:45.768423145 +0000 UTC m=+1047.064528368" watchObservedRunningTime="2026-01-24 07:10:45.774677095 +0000 UTC m=+1047.070782318" Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.839481 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.841437 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.709859 4675 generic.go:334] "Generic (PLEG): container finished" podID="feda0648-be0d-4fb4-a3a4-42440e47fec0" containerID="f92666dfa3d23140f8f187d267cdc1bb27ed28fbdc3af8b599e8f1268441877d" exitCode=0 Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.710259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerDied","Data":"f92666dfa3d23140f8f187d267cdc1bb27ed28fbdc3af8b599e8f1268441877d"} Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.712481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerStarted","Data":"78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761"} Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.715886 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerStarted","Data":"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a"} Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805047 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b7pft"] Jan 24 07:10:46 crc kubenswrapper[4675]: E0124 07:10:46.805335 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerName="init" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805350 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerName="init" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805479 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerName="init" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805987 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.813138 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.861483 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b7pft"] Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880474 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0062ff-7e89-4c55-8796-de1c9e311dd2-config\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovs-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-combined-ca-bundle\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880617 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khc6z\" (UniqueName: \"kubernetes.io/projected/1e0062ff-7e89-4c55-8796-de1c9e311dd2-kube-api-access-khc6z\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " 
pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovn-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.964801 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" path="/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volumes" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.981889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovn-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovn-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982260 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-khc6z\" (UniqueName: \"kubernetes.io/projected/1e0062ff-7e89-4c55-8796-de1c9e311dd2-kube-api-access-khc6z\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0062ff-7e89-4c55-8796-de1c9e311dd2-config\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovs-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-combined-ca-bundle\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.984137 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovs-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.986010 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0062ff-7e89-4c55-8796-de1c9e311dd2-config\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.988232 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.988587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.989828 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.993133 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-combined-ca-bundle\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.006170 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.016128 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.035924 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khc6z\" (UniqueName: \"kubernetes.io/projected/1e0062ff-7e89-4c55-8796-de1c9e311dd2-kube-api-access-khc6z\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084729 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.152003 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b7pft" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186438 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186458 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.187584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.187707 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.188241 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.238803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: 
\"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.284221 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.284862 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.313180 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.314381 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.340512 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.377173 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.400852 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.400902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.400927 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.401042 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.401323 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502569 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502606 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.503562 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.505741 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.506308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.507088 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.525688 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.692841 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.727746 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"aa0a5d8e8aac7f7b87343b538181792fc356629bd801832810e2fb71541e6be6"} Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.775180 4675 generic.go:334] "Generic (PLEG): container finished" podID="009254f3-9d76-4d89-8e35-d2b4c4be0da8" containerID="fccd7353bc9b512dab19a9626b8d14920cde36940e4eb90c09849fdd3c88cc40" exitCode=0 Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.775279 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerDied","Data":"fccd7353bc9b512dab19a9626b8d14920cde36940e4eb90c09849fdd3c88cc40"} Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.867420 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:10:49 crc kubenswrapper[4675]: W0124 07:10:49.884981 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd380ff5f_2ad7_495e_8cd4_2df178c2cd02.slice/crio-21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b WatchSource:0}: Error finding container 21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b: Status 404 returned error can't find the container with id 21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.961735 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:50 crc kubenswrapper[4675]: W0124 07:10:50.004261 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea20e09_7a89_4bb3_9413_ac6a647743d5.slice/crio-c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05 WatchSource:0}: Error finding container c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05: Status 404 returned error can't find the container with id c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.067018 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b7pft"] Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.178277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.799871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b7pft" event={"ID":"1e0062ff-7e89-4c55-8796-de1c9e311dd2","Type":"ContainerStarted","Data":"7dfc84b6b922b8d7ec493d0f3e73628bc4680f101d2a7964814de75ae82eded3"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.800428 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b7pft" event={"ID":"1e0062ff-7e89-4c55-8796-de1c9e311dd2","Type":"ContainerStarted","Data":"a2ee403aad04d5585208f77253209b388e25840601ddfe9a671d037335a615c7"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.805203 4675 generic.go:334] "Generic (PLEG): container finished" podID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerID="624d2ca0254053a07dea6ae65d180d0383f2a74d415ec75092f490a0bf16f7ec" exitCode=0 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.805281 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" event={"ID":"2ea20e09-7a89-4bb3-9413-ac6a647743d5","Type":"ContainerDied","Data":"624d2ca0254053a07dea6ae65d180d0383f2a74d415ec75092f490a0bf16f7ec"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 
07:10:50.805308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" event={"ID":"2ea20e09-7a89-4bb3-9413-ac6a647743d5","Type":"ContainerStarted","Data":"c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.831035 4675 generic.go:334] "Generic (PLEG): container finished" podID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerID="09845de0d6cfe71d82646bcb54bb06d9aef2cb9d4719b6cc5abade59a5012412" exitCode=0 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.831166 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerDied","Data":"09845de0d6cfe71d82646bcb54bb06d9aef2cb9d4719b6cc5abade59a5012412"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.831203 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerStarted","Data":"21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.838430 4675 generic.go:334] "Generic (PLEG): container finished" podID="e189b411-9dd6-496f-a001-41bc90c3fe00" containerID="0ae21b832453ed4f327a6995ee446a22269efa1f7b1b840709bec51995212ba9" exitCode=0 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.838788 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerDied","Data":"0ae21b832453ed4f327a6995ee446a22269efa1f7b1b840709bec51995212ba9"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.839585 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b7pft" podStartSLOduration=4.839573824 podStartE2EDuration="4.839573824s" podCreationTimestamp="2026-01-24 
07:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:10:50.832949425 +0000 UTC m=+1052.129054688" watchObservedRunningTime="2026-01-24 07:10:50.839573824 +0000 UTC m=+1052.135679057" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.843469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerStarted","Data":"7d706f3d1615592ecc3cfb08c32c63f350963a71eaf64e1c60fc01af77d17b35"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.857628 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19fa54da-8a94-427d-b8c6-0881657d3324","Type":"ContainerStarted","Data":"33c31370ac708fbd811b180b4aec0d14167eefe1861c7fd1a105ad4e8d5bc995"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.860045 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1d973fa-2671-49fe-82f1-1862aa70d784","Type":"ContainerStarted","Data":"77aff0e3913197141e772ab9bf145c2690bf6faeedd85b5dab59a3e557b6bd0c"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.874083 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"8f106c7b12eaf8cb669841601e806e0f5ea0f21c800ba11b14b623b4f2aa41cf"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.874513 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.879277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.975736 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.504361822 podStartE2EDuration="25.97569764s" podCreationTimestamp="2026-01-24 07:10:25 +0000 UTC" firstStartedPulling="2026-01-24 07:10:31.213137106 +0000 UTC m=+1032.509242319" lastFinishedPulling="2026-01-24 07:10:49.684472874 +0000 UTC m=+1050.980578137" observedRunningTime="2026-01-24 07:10:50.974162313 +0000 UTC m=+1052.270267546" watchObservedRunningTime="2026-01-24 07:10:50.97569764 +0000 UTC m=+1052.271802873" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.976067 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.455461292 podStartE2EDuration="35.976060249s" podCreationTimestamp="2026-01-24 07:10:15 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.922660655 +0000 UTC m=+1032.218765878" lastFinishedPulling="2026-01-24 07:10:40.443259612 +0000 UTC m=+1041.739364835" observedRunningTime="2026-01-24 07:10:50.939345895 +0000 UTC m=+1052.235451118" watchObservedRunningTime="2026-01-24 07:10:50.976060249 +0000 UTC m=+1052.272165482" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.047889 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.114203235 podStartE2EDuration="28.047873028s" podCreationTimestamp="2026-01-24 07:10:23 +0000 UTC" firstStartedPulling="2026-01-24 07:10:33.798928419 +0000 UTC m=+1035.095033642" lastFinishedPulling="2026-01-24 07:10:49.732598212 +0000 UTC m=+1051.028703435" observedRunningTime="2026-01-24 07:10:51.047807516 +0000 UTC m=+1052.343912759" watchObservedRunningTime="2026-01-24 07:10:51.047873028 +0000 UTC m=+1052.343978251" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.110764 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fsln2" podStartSLOduration=18.357298261 podStartE2EDuration="28.110745971s" podCreationTimestamp="2026-01-24 07:10:23 
+0000 UTC" firstStartedPulling="2026-01-24 07:10:31.430495288 +0000 UTC m=+1032.726600511" lastFinishedPulling="2026-01-24 07:10:41.183942958 +0000 UTC m=+1042.480048221" observedRunningTime="2026-01-24 07:10:51.10909329 +0000 UTC m=+1052.405198513" watchObservedRunningTime="2026-01-24 07:10:51.110745971 +0000 UTC m=+1052.406851194" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.287458 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.300874 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.331071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.380277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387179 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387410 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387546 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.392647 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t" (OuterVolumeSpecName: "kube-api-access-x7x6t") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "kube-api-access-x7x6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.405958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.409529 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config" (OuterVolumeSpecName: "config") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.424424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.444050 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489424 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489454 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489464 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489476 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.885575 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.885578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" event={"ID":"2ea20e09-7a89-4bb3-9413-ac6a647743d5","Type":"ContainerDied","Data":"c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05"} Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.885840 4675 scope.go:117] "RemoveContainer" containerID="624d2ca0254053a07dea6ae65d180d0383f2a74d415ec75092f490a0bf16f7ec" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.888872 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerStarted","Data":"b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91"} Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.889044 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.893132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerStarted","Data":"ac12533ccfb5e6056fb6c87c15914ef5208e4b47626f95614889cacd8b4ba640"} Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.894573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.894598 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.936707 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-8gnzm" podStartSLOduration=4.936684138 podStartE2EDuration="4.936684138s" podCreationTimestamp="2026-01-24 07:10:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:10:51.926425751 +0000 UTC m=+1053.222531004" watchObservedRunningTime="2026-01-24 07:10:51.936684138 +0000 UTC m=+1053.232789371" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.991544 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.994310 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.010112 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.021000 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.037686 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.619006118 podStartE2EDuration="36.037670409s" podCreationTimestamp="2026-01-24 07:10:16 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.790361601 +0000 UTC m=+1032.086466824" lastFinishedPulling="2026-01-24 07:10:41.209025852 +0000 UTC m=+1042.505131115" observedRunningTime="2026-01-24 07:10:52.036309957 +0000 UTC m=+1053.332415180" watchObservedRunningTime="2026-01-24 07:10:52.037670409 +0000 UTC m=+1053.333775632" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.306018 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:10:52 crc kubenswrapper[4675]: E0124 07:10:52.306348 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerName="init" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.306362 4675 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerName="init" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.306528 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerName="init" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.341844 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350095 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6ffzp" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350315 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350507 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350854 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.361473 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm866\" (UniqueName: \"kubernetes.io/projected/daf62505-a3ad-4c12-a520-4d412d26a71c-kube-api-access-hm866\") pod \"ovn-northd-0\" (UID: 
\"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-config\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506961 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506997 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.507019 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-scripts\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.507077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 
07:10:52.607937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608051 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm866\" (UniqueName: \"kubernetes.io/projected/daf62505-a3ad-4c12-a520-4d412d26a71c-kube-api-access-hm866\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-config\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-scripts\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.609392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.609500 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-scripts\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.611907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-config\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.612034 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.612181 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.612477 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.632444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm866\" (UniqueName: \"kubernetes.io/projected/daf62505-a3ad-4c12-a520-4d412d26a71c-kube-api-access-hm866\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.689543 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.962677 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" path="/var/lib/kubelet/pods/2ea20e09-7a89-4bb3-9413-ac6a647743d5/volumes" Jan 24 07:10:53 crc kubenswrapper[4675]: I0124 07:10:53.226503 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:10:53 crc kubenswrapper[4675]: W0124 07:10:53.235509 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaf62505_a3ad_4c12_a520_4d412d26a71c.slice/crio-aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80 WatchSource:0}: Error finding container aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80: Status 404 returned error can't find the container with id aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80 Jan 24 07:10:53 crc kubenswrapper[4675]: I0124 07:10:53.311912 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 24 07:10:53 crc kubenswrapper[4675]: I0124 07:10:53.913700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"daf62505-a3ad-4c12-a520-4d412d26a71c","Type":"ContainerStarted","Data":"aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80"} Jan 24 07:10:56 crc kubenswrapper[4675]: I0124 07:10:56.662970 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 24 07:10:56 crc kubenswrapper[4675]: I0124 07:10:56.663386 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.694843 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:57 crc 
kubenswrapper[4675]: I0124 07:10:57.760142 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.760432 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns" containerID="cri-o://f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7" gracePeriod=10 Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.885875 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.885908 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.956909 4675 generic.go:334] "Generic (PLEG): container finished" podID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerID="f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7" exitCode=0 Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.956968 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerDied","Data":"f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7"} Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.961594 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"daf62505-a3ad-4c12-a520-4d412d26a71c","Type":"ContainerStarted","Data":"04d2e939f0ac5cefeea7239fece06b85d9d7a340dc4a96ee172bffd1ef2cd7b6"} Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.240370 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.298569 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"3186ca49-238e-418a-95e7-f857a9f3bd75\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.298666 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"3186ca49-238e-418a-95e7-f857a9f3bd75\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.298703 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"3186ca49-238e-418a-95e7-f857a9f3bd75\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.304845 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89" (OuterVolumeSpecName: "kube-api-access-msf89") pod "3186ca49-238e-418a-95e7-f857a9f3bd75" (UID: "3186ca49-238e-418a-95e7-f857a9f3bd75"). InnerVolumeSpecName "kube-api-access-msf89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.333413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3186ca49-238e-418a-95e7-f857a9f3bd75" (UID: "3186ca49-238e-418a-95e7-f857a9f3bd75"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.333969 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config" (OuterVolumeSpecName: "config") pod "3186ca49-238e-418a-95e7-f857a9f3bd75" (UID: "3186ca49-238e-418a-95e7-f857a9f3bd75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.400804 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.400843 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.400853 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.974777 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerDied","Data":"68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436"} Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.975118 4675 scope.go:117] "RemoveContainer" containerID="f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.975393 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.983738 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"daf62505-a3ad-4c12-a520-4d412d26a71c","Type":"ContainerStarted","Data":"d1e77680cf5fe5ab2e2141e5c28147de413a5c8c310ece6924d2e9b488398ee2"} Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.984005 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.005916 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.006475 4675 scope.go:117] "RemoveContainer" containerID="4fd0f48bc136df29146a9e239c77e392eeb5ff8cf314ea027a498f9dbf5099cb" Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.026571 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.033896 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.843905377 podStartE2EDuration="7.033875649s" podCreationTimestamp="2026-01-24 07:10:52 +0000 UTC" firstStartedPulling="2026-01-24 07:10:53.238193032 +0000 UTC m=+1054.534298255" lastFinishedPulling="2026-01-24 07:10:57.428163304 +0000 UTC m=+1058.724268527" observedRunningTime="2026-01-24 07:10:59.020827165 +0000 UTC m=+1060.316932398" watchObservedRunningTime="2026-01-24 07:10:59.033875649 +0000 UTC m=+1060.329980872" Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.428000 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.525284 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 24 
07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.287868 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:11:00 crc kubenswrapper[4675]: E0124 07:11:00.288579 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.288596 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns" Jan 24 07:11:00 crc kubenswrapper[4675]: E0124 07:11:00.288619 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="init" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.288627 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="init" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.293032 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.294197 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.318705 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336633 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437689 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437813 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437879 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.438668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.439020 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.442267 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.442889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.471761 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" 
(UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.639516 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.950371 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" path="/var/lib/kubelet/pods/3186ca49-238e-418a-95e7-f857a9f3bd75/volumes" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.076156 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.544930 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.551364 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.551389 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.554262 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.555518 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.556857 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.557587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sspq9" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-lock\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745404 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7tp\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-kube-api-access-fn7tp\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-cache\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745690 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745803 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53054f-7616-43d6-9aeb-eb5f880b6e40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.846886 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-cache\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.846956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.846991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53054f-7616-43d6-9aeb-eb5f880b6e40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " 
pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-lock\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847078 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7tp\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-kube-api-access-fn7tp\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847108 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: E0124 07:11:01.847216 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:01 crc kubenswrapper[4675]: E0124 07:11:01.847228 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:01 crc kubenswrapper[4675]: E0124 07:11:01.847266 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:02.34724994 +0000 UTC m=+1063.643355153 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847343 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847362 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-cache\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847701 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-lock\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.852568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53054f-7616-43d6-9aeb-eb5f880b6e40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.867831 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 
07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.875795 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7tp\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-kube-api-access-fn7tp\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.013700 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1e65888-5032-411e-8910-5438e0aff32f" containerID="ae8e22c487bc5bca69369f08e9cf6514b43a32b610d114fdbb4d48fac338177d" exitCode=0 Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.013807 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerDied","Data":"ae8e22c487bc5bca69369f08e9cf6514b43a32b610d114fdbb4d48fac338177d"} Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.013838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerStarted","Data":"e4e440f949a16c7c92a1572ceb6020eb2c0abbdd347846f7e3ad225704016290"} Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.034536 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.142089 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.359084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:02 crc kubenswrapper[4675]: E0124 
07:11:02.359276 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:02 crc kubenswrapper[4675]: E0124 07:11:02.359292 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:02 crc kubenswrapper[4675]: E0124 07:11:02.359344 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:03.359325733 +0000 UTC m=+1064.655430956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.023081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerStarted","Data":"c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41"} Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.044224 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podStartSLOduration=3.044205557 podStartE2EDuration="3.044205557s" podCreationTimestamp="2026-01-24 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:03.040981649 +0000 UTC m=+1064.337086882" watchObservedRunningTime="2026-01-24 07:11:03.044205557 +0000 UTC m=+1064.340310781" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.377520 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:03 crc kubenswrapper[4675]: E0124 07:11:03.377663 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:03 crc kubenswrapper[4675]: E0124 07:11:03.377690 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:03 crc kubenswrapper[4675]: E0124 07:11:03.377759 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:05.377741105 +0000 UTC m=+1066.673846328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.572093 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.573092 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.579438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.584809 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.597373 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.628493 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.635641 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.684381 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.684498 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785599 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785667 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785700 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785759 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.786571 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.804743 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.886827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.886879 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.887545 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.900576 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.903878 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.920034 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:04 crc kubenswrapper[4675]: I0124 07:11:04.037789 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:04 crc kubenswrapper[4675]: I0124 07:11:04.522100 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:11:04 crc kubenswrapper[4675]: I0124 07:11:04.619710 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:11:04 crc kubenswrapper[4675]: W0124 07:11:04.638117 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147543ec_f687_430c_8a42_547c5861dbf4.slice/crio-e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07 WatchSource:0}: Error finding container e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07: Status 404 returned error can't find the container with id e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07 Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.047481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" 
event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerStarted","Data":"9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.047865 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerStarted","Data":"cc90ef57ed2f268af367d04ae88b666b28bec4ef07715447864520bd64348a57"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.049648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerStarted","Data":"974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.049712 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerStarted","Data":"e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.080064 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e5bb-account-create-update-r9xsl" podStartSLOduration=2.080045175 podStartE2EDuration="2.080045175s" podCreationTimestamp="2026-01-24 07:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:05.067513103 +0000 UTC m=+1066.363618326" watchObservedRunningTime="2026-01-24 07:11:05.080045175 +0000 UTC m=+1066.376150398" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.084446 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gqpfm" podStartSLOduration=2.08443359 podStartE2EDuration="2.08443359s" podCreationTimestamp="2026-01-24 07:11:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:05.078438115 +0000 UTC m=+1066.374543338" watchObservedRunningTime="2026-01-24 07:11:05.08443359 +0000 UTC m=+1066.380538813" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.304163 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.305398 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.310957 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.313931 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.412477 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.414321 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.416192 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.417678 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.421829 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.421869 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.421925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.422375 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.422397 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 
07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.422439 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:09.422424364 +0000 UTC m=+1070.718529587 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.425846 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.462568 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sz46b"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.463578 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.470519 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.471174 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-w4vn4 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-zjwxg" podUID="30abf472-a311-44dd-9853-cace1a1c41a9" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.480038 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sz46b"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526490 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526546 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " 
pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526655 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526756 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526820 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526842 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"swift-ring-rebalance-zjwxg\" (UID: 
\"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.527830 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.549421 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.559428 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.622298 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628222 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628276 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628385 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"swift-ring-rebalance-sz46b\" (UID: 
\"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628418 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628462 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628482 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628501 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: 
\"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628523 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628613 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628669 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628735 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.629459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" 
Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.629644 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.630919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.633096 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.635309 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.637933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.656917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730583 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730673 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730700 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.731662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.732649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.733177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod 
\"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.746426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.747327 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.747440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.748271 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.785476 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.061360 4675 generic.go:334] "Generic (PLEG): container finished" podID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerID="9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df" exitCode=0 Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.061417 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerDied","Data":"9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df"} Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.063810 4675 generic.go:334] "Generic (PLEG): container finished" podID="147543ec-f687-430c-8a42-547c5861dbf4" containerID="974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2" exitCode=0 Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.063871 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.063899 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerDied","Data":"974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2"} Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.081173 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.093850 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239759 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239858 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239894 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239937 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239969 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: 
\"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.240016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.240074 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.240763 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts" (OuterVolumeSpecName: "scripts") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.241581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.242448 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.245082 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4" (OuterVolumeSpecName: "kube-api-access-w4vn4") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "kube-api-access-w4vn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.246700 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.247840 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.255337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.260529 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sz46b"] Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341570 4675 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341615 4675 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341627 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341641 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341652 4675 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341662 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341672 4675 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.077436 4675 generic.go:334] "Generic (PLEG): container finished" podID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerID="cf93369f45b95439f48ef44ae1c4d7acc85ac8a88c7301daa8df8a93d1811848" exitCode=0 Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.077629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wmshq" event={"ID":"7cb298f5-b4c8-42df-8a3f-c1458b89e443","Type":"ContainerDied","Data":"cf93369f45b95439f48ef44ae1c4d7acc85ac8a88c7301daa8df8a93d1811848"} Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.077839 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wmshq" event={"ID":"7cb298f5-b4c8-42df-8a3f-c1458b89e443","Type":"ContainerStarted","Data":"89bf972e71483a180f3e39a4363040b61fbdded1daff7ef3862df0951bf7d9c1"} Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.081158 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerStarted","Data":"ebac677078572d2fe1c4d4efa213085362240fb07f0ab2327d75b7ba1eb6c2d8"} Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.081220 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.136374 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.142087 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.661020 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.668347 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.762821 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.765849 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.765896 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"147543ec-f687-430c-8a42-547c5861dbf4\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.765955 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"147543ec-f687-430c-8a42-547c5861dbf4\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.766014 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.766632 4675 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "147543ec-f687-430c-8a42-547c5861dbf4" (UID: "147543ec-f687-430c-8a42-547c5861dbf4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.766678 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ade78eac-6799-49f4-b0ea-2f3dcb21273e" (UID: "ade78eac-6799-49f4-b0ea-2f3dcb21273e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.787289 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2" (OuterVolumeSpecName: "kube-api-access-bhpf2") pod "ade78eac-6799-49f4-b0ea-2f3dcb21273e" (UID: "ade78eac-6799-49f4-b0ea-2f3dcb21273e"). InnerVolumeSpecName "kube-api-access-bhpf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.795178 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt" (OuterVolumeSpecName: "kube-api-access-czznt") pod "147543ec-f687-430c-8a42-547c5861dbf4" (UID: "147543ec-f687-430c-8a42-547c5861dbf4"). InnerVolumeSpecName "kube-api-access-czznt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868343 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868384 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868394 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868403 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.925729 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:11:07 crc kubenswrapper[4675]: E0124 07:11:07.926167 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147543ec-f687-430c-8a42-547c5861dbf4" containerName="mariadb-database-create" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.926182 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="147543ec-f687-430c-8a42-547c5861dbf4" containerName="mariadb-database-create" Jan 24 07:11:07 crc kubenswrapper[4675]: E0124 07:11:07.926202 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerName="mariadb-account-create-update" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 
07:11:07.926210 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerName="mariadb-account-create-update" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.930086 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerName="mariadb-account-create-update" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.930124 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="147543ec-f687-430c-8a42-547c5861dbf4" containerName="mariadb-database-create" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.936392 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.964239 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.021265 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.022178 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.023812 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.043745 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.071378 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.071899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.098564 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerDied","Data":"cc90ef57ed2f268af367d04ae88b666b28bec4ef07715447864520bd64348a57"} Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.098599 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc90ef57ed2f268af367d04ae88b666b28bec4ef07715447864520bd64348a57" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.099185 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.103741 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.103811 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerDied","Data":"e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07"} Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.103837 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173073 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173192 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"keystone-1e77-account-create-update-7b985\" (UID: 
\"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173223 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173937 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.213983 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.232033 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.235008 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.253647 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.259183 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.275094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.275142 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.277382 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.296417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.341737 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.361762 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.364262 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.368385 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.376242 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.376354 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.384852 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477399 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") 
" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477469 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477566 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.480880 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.507621 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"placement-db-create-zh8n7\" (UID: 
\"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.569090 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.579038 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.579185 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.580594 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.599308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: 
I0124 07:11:08.685509 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.958920 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30abf472-a311-44dd-9853-cace1a1c41a9" path="/var/lib/kubelet/pods/30abf472-a311-44dd-9853-cace1a1c41a9/volumes" Jan 24 07:11:09 crc kubenswrapper[4675]: I0124 07:11:09.493304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:09 crc kubenswrapper[4675]: E0124 07:11:09.493548 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:09 crc kubenswrapper[4675]: E0124 07:11:09.493561 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:09 crc kubenswrapper[4675]: E0124 07:11:09.493603 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:17.493586787 +0000 UTC m=+1078.789692010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.030859 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wmshq"
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.133492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wmshq" event={"ID":"7cb298f5-b4c8-42df-8a3f-c1458b89e443","Type":"ContainerDied","Data":"89bf972e71483a180f3e39a4363040b61fbdded1daff7ef3862df0951bf7d9c1"}
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.133532 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bf972e71483a180f3e39a4363040b61fbdded1daff7ef3862df0951bf7d9c1"
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.133599 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wmshq"
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.204884 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") "
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.205015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") "
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.206513 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cb298f5-b4c8-42df-8a3f-c1458b89e443" (UID: "7cb298f5-b4c8-42df-8a3f-c1458b89e443"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.226360 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86" (OuterVolumeSpecName: "kube-api-access-jkb86") pod "7cb298f5-b4c8-42df-8a3f-c1458b89e443" (UID: "7cb298f5-b4c8-42df-8a3f-c1458b89e443"). InnerVolumeSpecName "kube-api-access-jkb86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.307274 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.307308 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.642441 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.699378 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"]
Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.699593 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-8gnzm" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns" containerID="cri-o://b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91" gracePeriod=10
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.146195 4675 generic.go:334] "Generic (PLEG): container finished" podID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerID="b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91" exitCode=0
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.146210 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerDied","Data":"b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91"}
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.553088 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wmshq"]
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.565381 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wmshq"]
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.815975 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:11:11 crc kubenswrapper[4675]: W0124 07:11:11.906023 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66b11fd_5bd9_4ba0_bd60_b370a709be63.slice/crio-25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe WatchSource:0}: Error finding container 25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe: Status 404 returned error can't find the container with id 25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.907908 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"]
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935763 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") "
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935858 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") "
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935887 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") "
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") "
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935999 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") "
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.944898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5" (OuterVolumeSpecName: "kube-api-access-2sgk5") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "kube-api-access-2sgk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.979853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.984756 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.987565 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.991423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config" (OuterVolumeSpecName: "config") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038234 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038469 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038486 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038495 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038504 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.055451 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"]
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.065758 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s7r45"]
Jan 24 07:11:12 crc kubenswrapper[4675]: W0124 07:11:12.082071 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b2533f_cb15_4581_84c1_81235b34bfe5.slice/crio-9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d WatchSource:0}: Error finding container 9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d: Status 404 returned error can't find the container with id 9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.103209 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zh8n7"]
Jan 24 07:11:12 crc kubenswrapper[4675]: W0124 07:11:12.107689 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45f87016_197d_4a38_94d7_4c7828af8ee3.slice/crio-433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1 WatchSource:0}: Error finding container 433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1: Status 404 returned error can't find the container with id 433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.165577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerStarted","Data":"1bfb5aaca42be58bec27fbd4186467ef6026e0f9acdf2a15909cb65d6b4b387c"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.172191 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e77-account-create-update-7b985" event={"ID":"5b33dcb3-da61-44f3-9666-2b4afb90b9cd","Type":"ContainerStarted","Data":"47e09f6052ac6920e57c50185bbdc16b1a1efdb54f1858d643790403835da9ca"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.173868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerStarted","Data":"76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.173898 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerStarted","Data":"25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.175572 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zh8n7" event={"ID":"45f87016-197d-4a38-94d7-4c7828af8ee3","Type":"ContainerStarted","Data":"433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.176669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7r45" event={"ID":"33b2533f-cb15-4581-84c1-81235b34bfe5","Type":"ContainerStarted","Data":"9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.185609 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sz46b" podStartSLOduration=2.02857011 podStartE2EDuration="7.185594336s" podCreationTimestamp="2026-01-24 07:11:05 +0000 UTC" firstStartedPulling="2026-01-24 07:11:06.267518364 +0000 UTC m=+1067.563623587" lastFinishedPulling="2026-01-24 07:11:11.42454259 +0000 UTC m=+1072.720647813" observedRunningTime="2026-01-24 07:11:12.178614458 +0000 UTC m=+1073.474719681" watchObservedRunningTime="2026-01-24 07:11:12.185594336 +0000 UTC m=+1073.481699559"
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.187533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerDied","Data":"21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b"}
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.187571 4675 scope.go:117] "RemoveContainer" containerID="b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91"
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.187703 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.201222 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1ef3-account-create-update-txcmj" podStartSLOduration=4.201209523 podStartE2EDuration="4.201209523s" podCreationTimestamp="2026-01-24 07:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:12.198454086 +0000 UTC m=+1073.494559309" watchObservedRunningTime="2026-01-24 07:11:12.201209523 +0000 UTC m=+1073.497314736"
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.215148 4675 scope.go:117] "RemoveContainer" containerID="09845de0d6cfe71d82646bcb54bb06d9aef2cb9d4719b6cc5abade59a5012412"
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.232532 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"]
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.241426 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"]
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.955247 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" path="/var/lib/kubelet/pods/7cb298f5-b4c8-42df-8a3f-c1458b89e443/volumes"
Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.956702 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" path="/var/lib/kubelet/pods/d380ff5f-2ad7-495e-8cd4-2df178c2cd02/volumes"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.201326 4675 generic.go:334] "Generic (PLEG): container finished" podID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerID="cb5f5de19b4ad05d5cb260b67a7ffda59880a5be1b09d0c5d743d36c1be22ba3" exitCode=0
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.201443 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7r45" event={"ID":"33b2533f-cb15-4581-84c1-81235b34bfe5","Type":"ContainerDied","Data":"cb5f5de19b4ad05d5cb260b67a7ffda59880a5be1b09d0c5d743d36c1be22ba3"}
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.205914 4675 generic.go:334] "Generic (PLEG): container finished" podID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerID="f1cbd2804e3c921d0862ddd3c3e25da9a0eb08d8f218d2fcc9340af63efc5b69" exitCode=0
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.205997 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e77-account-create-update-7b985" event={"ID":"5b33dcb3-da61-44f3-9666-2b4afb90b9cd","Type":"ContainerDied","Data":"f1cbd2804e3c921d0862ddd3c3e25da9a0eb08d8f218d2fcc9340af63efc5b69"}
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.207956 4675 generic.go:334] "Generic (PLEG): container finished" podID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerID="76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30" exitCode=0
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.207990 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerDied","Data":"76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30"}
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.209788 4675 generic.go:334] "Generic (PLEG): container finished" podID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerID="2410d88c73d46b104c7a96605edcd69c1a2ae6d7410fac2b2340c43785d9bc0e" exitCode=0
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.209868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zh8n7" event={"ID":"45f87016-197d-4a38-94d7-4c7828af8ee3","Type":"ContainerDied","Data":"2410d88c73d46b104c7a96605edcd69c1a2ae6d7410fac2b2340c43785d9bc0e"}
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644054 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-95xkb"]
Jan 24 07:11:13 crc kubenswrapper[4675]: E0124 07:11:13.644452 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerName="mariadb-account-create-update"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644468 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerName="mariadb-account-create-update"
Jan 24 07:11:13 crc kubenswrapper[4675]: E0124 07:11:13.644478 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644485 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns"
Jan 24 07:11:13 crc kubenswrapper[4675]: E0124 07:11:13.644501 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="init"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644507 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="init"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644662 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644674 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerName="mariadb-account-create-update"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.645158 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.646872 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.647097 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wwtw8"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.667748 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-95xkb"]
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.772893 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.773030 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.773087 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.773152 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.874985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.875048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.875101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.875177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.880673 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.881791 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.886599 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.893918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.968997 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb"
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.510189 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj"
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.608940 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-95xkb"]
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.689478 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.689559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.690712 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f66b11fd-5bd9-4ba0-bd60-b370a709be63" (UID: "f66b11fd-5bd9-4ba0-bd60-b370a709be63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.701423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg" (OuterVolumeSpecName: "kube-api-access-5pxhg") pod "f66b11fd-5bd9-4ba0-bd60-b370a709be63" (UID: "f66b11fd-5bd9-4ba0-bd60-b370a709be63"). InnerVolumeSpecName "kube-api-access-5pxhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.703079 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zh8n7"
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.714642 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7r45"
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.791860 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"45f87016-197d-4a38-94d7-4c7828af8ee3\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.791944 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"45f87016-197d-4a38-94d7-4c7828af8ee3\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.792321 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.792332 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.793025 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45f87016-197d-4a38-94d7-4c7828af8ee3" (UID: "45f87016-197d-4a38-94d7-4c7828af8ee3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.795920 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx" (OuterVolumeSpecName: "kube-api-access-l2sfx") pod "45f87016-197d-4a38-94d7-4c7828af8ee3" (UID: "45f87016-197d-4a38-94d7-4c7828af8ee3"). InnerVolumeSpecName "kube-api-access-l2sfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.813111 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985"
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892864 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"33b2533f-cb15-4581-84c1-81235b34bfe5\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892921 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892944 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"33b2533f-cb15-4581-84c1-81235b34bfe5\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892990 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") "
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.893300 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.893316 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.893984 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b33dcb3-da61-44f3-9666-2b4afb90b9cd" (UID: "5b33dcb3-da61-44f3-9666-2b4afb90b9cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.894387 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33b2533f-cb15-4581-84c1-81235b34bfe5" (UID: "33b2533f-cb15-4581-84c1-81235b34bfe5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.902161 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r" (OuterVolumeSpecName: "kube-api-access-fhn6r") pod "5b33dcb3-da61-44f3-9666-2b4afb90b9cd" (UID: "5b33dcb3-da61-44f3-9666-2b4afb90b9cd"). InnerVolumeSpecName "kube-api-access-fhn6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.902693 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp" (OuterVolumeSpecName: "kube-api-access-6mchp") pod "33b2533f-cb15-4581-84c1-81235b34bfe5" (UID: "33b2533f-cb15-4581-84c1-81235b34bfe5"). InnerVolumeSpecName "kube-api-access-6mchp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046340 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046402 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046413 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046428 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.286609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerStarted","Data":"aba894555e46ec22e81cf8b996b4a30e472efbd5119d3ffa6f69ee65a8d156ee"}
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.288298 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7r45" event={"ID":"33b2533f-cb15-4581-84c1-81235b34bfe5","Type":"ContainerDied","Data":"9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d"}
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.288322 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.288440 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7r45"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.289546 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e77-account-create-update-7b985" event={"ID":"5b33dcb3-da61-44f3-9666-2b4afb90b9cd","Type":"ContainerDied","Data":"47e09f6052ac6920e57c50185bbdc16b1a1efdb54f1858d643790403835da9ca"}
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.289578 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e09f6052ac6920e57c50185bbdc16b1a1efdb54f1858d643790403835da9ca"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.289640 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.295198 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerDied","Data":"25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe"}
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.295223 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.295269 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.297524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zh8n7" event={"ID":"45f87016-197d-4a38-94d7-4c7828af8ee3","Type":"ContainerDied","Data":"433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1"}
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.297544 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1"
Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.297583 4675 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321818 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321834 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321856 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321863 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321871 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321878 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321890 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321895 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322032 4675 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322044 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322051 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322059 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322515 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.328138 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.342886 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.351794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.351857 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.452852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.452973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.455060 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.468979 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.642023 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.085860 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:16 crc kubenswrapper[4675]: W0124 07:11:16.095760 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb WatchSource:0}: Error finding container 1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb: Status 404 returned error can't find the container with id 1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.306962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerStarted","Data":"d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b"} Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.307301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerStarted","Data":"1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb"} Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.323328 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-h2rs4" podStartSLOduration=1.323312311 podStartE2EDuration="1.323312311s" podCreationTimestamp="2026-01-24 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:16.321969169 +0000 UTC m=+1077.618074392" watchObservedRunningTime="2026-01-24 07:11:16.323312311 +0000 UTC m=+1077.619417524" Jan 
24 07:11:17 crc kubenswrapper[4675]: I0124 07:11:17.316176 4675 generic.go:334] "Generic (PLEG): container finished" podID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerID="d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b" exitCode=0 Jan 24 07:11:17 crc kubenswrapper[4675]: I0124 07:11:17.316218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerDied","Data":"d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b"} Jan 24 07:11:17 crc kubenswrapper[4675]: I0124 07:11:17.586277 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:17 crc kubenswrapper[4675]: E0124 07:11:17.586468 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:17 crc kubenswrapper[4675]: E0124 07:11:17.586498 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:17 crc kubenswrapper[4675]: E0124 07:11:17.586549 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:33.586531743 +0000 UTC m=+1094.882636966 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.336105 4675 generic.go:334] "Generic (PLEG): container finished" podID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerID="78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761" exitCode=0 Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.336181 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerDied","Data":"78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761"} Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.340190 4675 generic.go:334] "Generic (PLEG): container finished" podID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" exitCode=0 Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.340343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerDied","Data":"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a"} Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.614024 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2x2kb" podUID="b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1" containerName="ovn-controller" probeResult="failure" output=< Jan 24 07:11:18 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 07:11:18 crc kubenswrapper[4675]: > Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.663235 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 
07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.698747 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.803945 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.804071 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.804648 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" (UID: "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.809485 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q" (OuterVolumeSpecName: "kube-api-access-nzf4q") pod "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" (UID: "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad"). InnerVolumeSpecName "kube-api-access-nzf4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.905228 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.905268 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.360854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerStarted","Data":"8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668"} Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.361086 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.373893 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerStarted","Data":"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075"} Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.374130 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.376302 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerDied","Data":"1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb"} Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.376368 4675 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.376326 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.435206 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=56.135828106 podStartE2EDuration="1m6.435187326s" podCreationTimestamp="2026-01-24 07:10:13 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.144107867 +0000 UTC m=+1031.440213090" lastFinishedPulling="2026-01-24 07:10:40.443467067 +0000 UTC m=+1041.739572310" observedRunningTime="2026-01-24 07:11:19.431171609 +0000 UTC m=+1080.727276832" watchObservedRunningTime="2026-01-24 07:11:19.435187326 +0000 UTC m=+1080.731292549" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.439274 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.93360354 podStartE2EDuration="1m5.439258184s" podCreationTimestamp="2026-01-24 07:10:14 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.705627582 +0000 UTC m=+1032.001732805" lastFinishedPulling="2026-01-24 07:10:41.211282196 +0000 UTC m=+1042.507387449" observedRunningTime="2026-01-24 07:11:19.399306373 +0000 UTC m=+1080.695411616" watchObservedRunningTime="2026-01-24 07:11:19.439258184 +0000 UTC m=+1080.735363407" Jan 24 07:11:20 crc kubenswrapper[4675]: I0124 07:11:20.405460 4675 generic.go:334] "Generic (PLEG): container finished" podID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerID="1bfb5aaca42be58bec27fbd4186467ef6026e0f9acdf2a15909cb65d6b4b387c" exitCode=0 Jan 24 07:11:20 crc kubenswrapper[4675]: I0124 07:11:20.405520 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" 
event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerDied","Data":"1bfb5aaca42be58bec27fbd4186467ef6026e0f9acdf2a15909cb65d6b4b387c"} Jan 24 07:11:21 crc kubenswrapper[4675]: I0124 07:11:21.569312 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:21 crc kubenswrapper[4675]: I0124 07:11:21.585886 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:22 crc kubenswrapper[4675]: I0124 07:11:22.959830 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" path="/var/lib/kubelet/pods/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad/volumes" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.578020 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2x2kb" podUID="b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1" containerName="ovn-controller" probeResult="failure" output=< Jan 24 07:11:23 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 07:11:23 crc kubenswrapper[4675]: > Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.610781 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:11:23 crc kubenswrapper[4675]: E0124 07:11:23.823032 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.851510 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:23 crc kubenswrapper[4675]: E0124 07:11:23.851872 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerName="mariadb-account-create-update" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.851889 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerName="mariadb-account-create-update" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.852121 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerName="mariadb-account-create-update" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.852706 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.857335 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.859208 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990345 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: 
\"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990659 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990704 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092235 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092628 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092692 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.093063 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.094473 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.145690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.175371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.588894 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.590281 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.595015 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.616306 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.732543 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.732608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.833906 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.833988 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"root-account-create-update-8k7rv\" (UID: 
\"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.834844 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.855668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.913777 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:28 crc kubenswrapper[4675]: I0124 07:11:28.582585 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2x2kb" podUID="b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1" containerName="ovn-controller" probeResult="failure" output=< Jan 24 07:11:28 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 07:11:28 crc kubenswrapper[4675]: > Jan 24 07:11:31 crc kubenswrapper[4675]: E0124 07:11:31.674755 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 24 07:11:31 crc kubenswrapper[4675]: E0124 07:11:31.675248 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6jbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-95xkb_openstack(7e53c5a1-6293-46d9-9783-e7d183050152): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 24 07:11:31 crc kubenswrapper[4675]: E0124 07:11:31.676422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-95xkb" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.788510 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918435 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918459 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918523 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc 
kubenswrapper[4675]: I0124 07:11:31.918549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918652 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.920316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.920523 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.939259 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.951004 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7" (OuterVolumeSpecName: "kube-api-access-nzzv7") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "kube-api-access-nzzv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.974075 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts" (OuterVolumeSpecName: "scripts") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.012860 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.017266 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020286 4675 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020306 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020320 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020330 4675 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020340 4675 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020350 4675 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020361 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.191362 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:32 crc kubenswrapper[4675]: W0124 07:11:32.196958 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099e300a_0fba_4964_9d0b_34124522c8f3.slice/crio-2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122 WatchSource:0}: Error finding container 2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122: Status 404 returned error can't find the container with id 2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122 Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.251098 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:11:32 crc kubenswrapper[4675]: W0124 07:11:32.255343 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c46b58_28e2_4896_8ae5_dc53cbe96ec9.slice/crio-3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717 WatchSource:0}: Error finding container 3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717: Status 404 returned error can't find the container with id 3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717 Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.507673 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" 
event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerStarted","Data":"1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.508750 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerStarted","Data":"3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.510115 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerDied","Data":"ebac677078572d2fe1c4d4efa213085362240fb07f0ab2327d75b7ba1eb6c2d8"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.510157 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebac677078572d2fe1c4d4efa213085362240fb07f0ab2327d75b7ba1eb6c2d8" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.510159 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.511856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb-config-9whd8" event={"ID":"099e300a-0fba-4964-9d0b-34124522c8f3","Type":"ContainerStarted","Data":"874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.511902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb-config-9whd8" event={"ID":"099e300a-0fba-4964-9d0b-34124522c8f3","Type":"ContainerStarted","Data":"2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122"} Jan 24 07:11:32 crc kubenswrapper[4675]: E0124 07:11:32.513188 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-95xkb" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.532918 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8k7rv" podStartSLOduration=6.532902934 podStartE2EDuration="6.532902934s" podCreationTimestamp="2026-01-24 07:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:32.526872239 +0000 UTC m=+1093.822977462" watchObservedRunningTime="2026-01-24 07:11:32.532902934 +0000 UTC m=+1093.829008157" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.573164 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2x2kb-config-9whd8" podStartSLOduration=9.573145563 podStartE2EDuration="9.573145563s" podCreationTimestamp="2026-01-24 07:11:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:32.563256734 +0000 UTC m=+1093.859361967" watchObservedRunningTime="2026-01-24 07:11:32.573145563 +0000 UTC m=+1093.869250786" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.518981 4675 generic.go:334] "Generic (PLEG): container finished" podID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerID="1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c" exitCode=0 Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.519088 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerDied","Data":"1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c"} Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.521048 4675 generic.go:334] "Generic (PLEG): container finished" podID="099e300a-0fba-4964-9d0b-34124522c8f3" containerID="874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a" exitCode=0 Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.521094 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb-config-9whd8" event={"ID":"099e300a-0fba-4964-9d0b-34124522c8f3","Type":"ContainerDied","Data":"874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a"} Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.588151 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2x2kb" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.649931 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 
07:11:33.660190 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.675951 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 24 07:11:34 crc kubenswrapper[4675]: E0124 07:11:34.051009 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.258703 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.529331 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"f74c1c5ecd4e950d117cb862a809c63f4eafa20c3cc7e0b767b66111f9f43cc0"} Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.863065 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.872530 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973372 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973433 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973467 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973489 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973497 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run" (OuterVolumeSpecName: "var-run") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973612 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973643 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973671 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973969 4675 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973641 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod 
"099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973993 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.974552 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.974806 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts" (OuterVolumeSpecName: "scripts") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.975132 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38c46b58-28e2-4896-8ae5-dc53cbe96ec9" (UID: "38c46b58-28e2-4896-8ae5-dc53cbe96ec9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.979241 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv" (OuterVolumeSpecName: "kube-api-access-7mhtv") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "kube-api-access-7mhtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.979656 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2" (OuterVolumeSpecName: "kube-api-access-pjbr2") pod "38c46b58-28e2-4896-8ae5-dc53cbe96ec9" (UID: "38c46b58-28e2-4896-8ae5-dc53cbe96ec9"). InnerVolumeSpecName "kube-api-access-pjbr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075207 4675 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075241 4675 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075250 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075264 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbr2\" (UniqueName: 
\"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075275 4675 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075283 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075291 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.134902 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.368536 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.375183 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.534886 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.549359 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerDied","Data":"3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717"} Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 
07:11:35.549395 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.549453 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.564434 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.564499 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.816982 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:11:35 crc kubenswrapper[4675]: E0124 07:11:35.823940 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerName="swift-ring-rebalance" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824166 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerName="swift-ring-rebalance" Jan 24 07:11:35 crc kubenswrapper[4675]: E0124 07:11:35.824262 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerName="mariadb-account-create-update" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824335 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerName="mariadb-account-create-update" Jan 24 07:11:35 crc kubenswrapper[4675]: E0124 07:11:35.824453 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" containerName="ovn-config" Jan 24 07:11:35 crc 
kubenswrapper[4675]: I0124 07:11:35.824534 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" containerName="ovn-config" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824788 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerName="swift-ring-rebalance" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824909 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerName="mariadb-account-create-update" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.825032 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" containerName="ovn-config" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.825799 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.907410 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.932372 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.933266 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.954229 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.987986 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.988298 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.994862 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090193 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cinder-db-create-6lfkb\" (UID: 
\"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090354 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090386 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.091271 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.106587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.141251 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.144119 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.152086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.153631 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.154630 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.169232 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.181706 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.189685 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.191844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.191898 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: 
I0124 07:11:36.192778 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.243015 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.244758 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.281297 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.282472 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.293542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.295763 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.295922 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.296339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.296841 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.366035 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.367250 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.371165 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.371498 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.371784 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.372069 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.383330 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402157 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402466 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402703 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402846 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.404004 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.404538 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.428236 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.435233 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.456703 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.457863 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.461234 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.470668 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.476455 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.491030 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.515390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.515734 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.515867 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.516582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.520027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.516540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.558781 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.599441 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.621647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622041 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622382 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: 
\"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.626384 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.627279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.642913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.684965 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.724499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.724548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.725281 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.754202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.815510 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.952115 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" path="/var/lib/kubelet/pods/099e300a-0fba-4964-9d0b-34124522c8f3/volumes" Jan 24 07:11:38 crc kubenswrapper[4675]: I0124 07:11:38.629481 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:11:38 crc kubenswrapper[4675]: I0124 07:11:38.629905 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:11:38 crc kubenswrapper[4675]: I0124 07:11:38.892947 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.177827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.279681 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.302591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.394518 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.600863 4675 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.607076 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4de6-account-create-update-vzw5r" event={"ID":"ba347982-6836-4e3f-80c3-ef28ffc5e5cc","Type":"ContainerStarted","Data":"30fa7852dd2cd0c46b6287d9bd9331a1f3b6baa20f6a90a77efc7b725b86fc49"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.613448 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"0617920e754350b9c06bb7df18005b6b1daa3ce8d49d79b6c375ffc8459c806a"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.613567 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"2cd35c0f19aeeb4bab8e72603ad63dd854416651c80bd39ad835cc64dce64d41"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.615280 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.622206 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb8-account-create-update-2lngf" event={"ID":"f802e166-b89b-4e38-9230-762edc86b32c","Type":"ContainerStarted","Data":"6e049d65046fc8bd2d246cb907d0d0be4b249ca0224a8649a7d6f2866dfa2350"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.623619 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerStarted","Data":"243cc2bd6dd3a09154104a9083678353839a94d8467ae1e1de86d7b6bc695da9"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.624985 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" 
event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerStarted","Data":"bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.625009 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerStarted","Data":"e778afde4a601de36f6f4d3877893264b0f3b9192d53f14716fa1a1c8bbb75f7"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.627470 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerStarted","Data":"03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e"} Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.627524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerStarted","Data":"b8088589a7b47ccaff1366db2f315b596852c98dffbcff028ab603c645ee271e"} Jan 24 07:11:39 crc kubenswrapper[4675]: W0124 07:11:39.640056 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab6d6162_9f1a_409f_a1aa_87a14a15bf7f.slice/crio-0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736 WatchSource:0}: Error finding container 0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736: Status 404 returned error can't find the container with id 0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736 Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.652691 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-6lfkb" podStartSLOduration=4.652661248 podStartE2EDuration="4.652661248s" podCreationTimestamp="2026-01-24 07:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:39.639647255 +0000 UTC m=+1100.935752478" watchObservedRunningTime="2026-01-24 07:11:39.652661248 +0000 UTC m=+1100.948766471" Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.674083 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-5zwrb" podStartSLOduration=3.674058783 podStartE2EDuration="3.674058783s" podCreationTimestamp="2026-01-24 07:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:39.656508741 +0000 UTC m=+1100.952613964" watchObservedRunningTime="2026-01-24 07:11:39.674058783 +0000 UTC m=+1100.970164006" Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.641989 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerID="03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e" exitCode=0 Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.642678 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerDied","Data":"03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.649527 4675 generic.go:334] "Generic (PLEG): container finished" podID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerID="6d211dc6ddf9ea6d7e3e8e95b729de63c53d51d2eead6595b62cad41e16dadc4" exitCode=0 Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.649609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4de6-account-create-update-vzw5r" event={"ID":"ba347982-6836-4e3f-80c3-ef28ffc5e5cc","Type":"ContainerDied","Data":"6d211dc6ddf9ea6d7e3e8e95b729de63c53d51d2eead6595b62cad41e16dadc4"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.652006 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"4127ff24f622d04e4b50ebcd517a9334968f2e41d1a01ae243cb11e8f67f78d4"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.652037 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"a8ef207c46b04e2a928306f9a3920e47acdf3c708baed032d4da30c34a4fd7d7"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.653834 4675 generic.go:334] "Generic (PLEG): container finished" podID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerID="face7e5c0b8054d6c99e86c42a7c3b558ca54c06b16b7b249ea8d2239d88036b" exitCode=0 Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.653957 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-391e-account-create-update-r55gs" event={"ID":"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f","Type":"ContainerDied","Data":"face7e5c0b8054d6c99e86c42a7c3b558ca54c06b16b7b249ea8d2239d88036b"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.653984 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-391e-account-create-update-r55gs" event={"ID":"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f","Type":"ContainerStarted","Data":"0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.660643 4675 generic.go:334] "Generic (PLEG): container finished" podID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerID="c2b0d0fa45b902eb0ffa086ad50d248f34796e32c1a20209565126bead4f77e0" exitCode=0 Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.660761 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bbqrz" 
event={"ID":"5e0a3027-2e26-4258-aaee-a5f0df76fe34","Type":"ContainerDied","Data":"c2b0d0fa45b902eb0ffa086ad50d248f34796e32c1a20209565126bead4f77e0"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.660791 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bbqrz" event={"ID":"5e0a3027-2e26-4258-aaee-a5f0df76fe34","Type":"ContainerStarted","Data":"a410f9cc0f282aeac5c407ee365c1406247090a0b1ca3f5d68c0ef0fc41a5d8f"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.662690 4675 generic.go:334] "Generic (PLEG): container finished" podID="f802e166-b89b-4e38-9230-762edc86b32c" containerID="2617af6172b0f231078c0676a80fde395fe2ef1163c9fa0791bb89294c2f806c" exitCode=0 Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.662773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb8-account-create-update-2lngf" event={"ID":"f802e166-b89b-4e38-9230-762edc86b32c","Type":"ContainerDied","Data":"2617af6172b0f231078c0676a80fde395fe2ef1163c9fa0791bb89294c2f806c"} Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.664382 4675 generic.go:334] "Generic (PLEG): container finished" podID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerID="bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0" exitCode=0 Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.664439 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerDied","Data":"bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0"} Jan 24 07:11:44 crc kubenswrapper[4675]: E0124 07:11:44.285368 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.338354 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.357201 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.392827 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.422207 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436483 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436523 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\" (UID: 
\"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436782 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.437447 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e0a3027-2e26-4258-aaee-a5f0df76fe34" (UID: "5e0a3027-2e26-4258-aaee-a5f0df76fe34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.437855 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" (UID: "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.438075 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.440908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm" (OuterVolumeSpecName: "kube-api-access-89clm") pod "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" (UID: "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9"). InnerVolumeSpecName "kube-api-access-89clm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.443519 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv" (OuterVolumeSpecName: "kube-api-access-gsvbv") pod "5e0a3027-2e26-4258-aaee-a5f0df76fe34" (UID: "5e0a3027-2e26-4258-aaee-a5f0df76fe34"). InnerVolumeSpecName "kube-api-access-gsvbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.454077 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542397 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542568 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"f802e166-b89b-4e38-9230-762edc86b32c\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542600 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542656 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjq6q\" (UniqueName: 
\"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542682 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542724 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542772 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"f802e166-b89b-4e38-9230-762edc86b32c\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542872 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543209 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543220 4675 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543230 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543239 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" (UID: "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.546303 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cce44ec9-1ffb-44d7-bcce-250a1fdf6959" (UID: "cce44ec9-1ffb-44d7-bcce-250a1fdf6959"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.546397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba347982-6836-4e3f-80c3-ef28ffc5e5cc" (UID: "ba347982-6836-4e3f-80c3-ef28ffc5e5cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.546555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f802e166-b89b-4e38-9230-762edc86b32c" (UID: "f802e166-b89b-4e38-9230-762edc86b32c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.547915 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf" (OuterVolumeSpecName: "kube-api-access-c99pf") pod "ba347982-6836-4e3f-80c3-ef28ffc5e5cc" (UID: "ba347982-6836-4e3f-80c3-ef28ffc5e5cc"). InnerVolumeSpecName "kube-api-access-c99pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.550158 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s" (OuterVolumeSpecName: "kube-api-access-29n5s") pod "f802e166-b89b-4e38-9230-762edc86b32c" (UID: "f802e166-b89b-4e38-9230-762edc86b32c"). InnerVolumeSpecName "kube-api-access-29n5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.550839 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg" (OuterVolumeSpecName: "kube-api-access-rqdkg") pod "cce44ec9-1ffb-44d7-bcce-250a1fdf6959" (UID: "cce44ec9-1ffb-44d7-bcce-250a1fdf6959"). InnerVolumeSpecName "kube-api-access-rqdkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.551524 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q" (OuterVolumeSpecName: "kube-api-access-gjq6q") pod "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" (UID: "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f"). InnerVolumeSpecName "kube-api-access-gjq6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645122 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645155 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645165 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645178 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645192 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645201 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645209 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645219 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.710562 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"204d783f8319845444798286b072a4f66b8ac0266261a709e1f6e920cd514933"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.710604 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"56ba9ddfd5672aac91cc5eac3ce1a99e58bc3e1c3f1089afed4ffbc33bc9ffa6"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.711666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-391e-account-create-update-r55gs" event={"ID":"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f","Type":"ContainerDied","Data":"0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.711688 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.711763 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.715413 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.715830 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bbqrz" event={"ID":"5e0a3027-2e26-4258-aaee-a5f0df76fe34","Type":"ContainerDied","Data":"a410f9cc0f282aeac5c407ee365c1406247090a0b1ca3f5d68c0ef0fc41a5d8f"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.715885 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a410f9cc0f282aeac5c407ee365c1406247090a0b1ca3f5d68c0ef0fc41a5d8f" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.718793 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb8-account-create-update-2lngf" event={"ID":"f802e166-b89b-4e38-9230-762edc86b32c","Type":"ContainerDied","Data":"6e049d65046fc8bd2d246cb907d0d0be4b249ca0224a8649a7d6f2866dfa2350"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.718832 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e049d65046fc8bd2d246cb907d0d0be4b249ca0224a8649a7d6f2866dfa2350" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.718881 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.722468 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerStarted","Data":"1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.726152 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerDied","Data":"e778afde4a601de36f6f4d3877893264b0f3b9192d53f14716fa1a1c8bbb75f7"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.726188 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e778afde4a601de36f6f4d3877893264b0f3b9192d53f14716fa1a1c8bbb75f7" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.726598 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.747492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerDied","Data":"b8088589a7b47ccaff1366db2f315b596852c98dffbcff028ab603c645ee271e"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.747530 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8088589a7b47ccaff1366db2f315b596852c98dffbcff028ab603c645ee271e" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.747606 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.756641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4de6-account-create-update-vzw5r" event={"ID":"ba347982-6836-4e3f-80c3-ef28ffc5e5cc","Type":"ContainerDied","Data":"30fa7852dd2cd0c46b6287d9bd9331a1f3b6baa20f6a90a77efc7b725b86fc49"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.756678 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fa7852dd2cd0c46b6287d9bd9331a1f3b6baa20f6a90a77efc7b725b86fc49" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.756741 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.761525 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ttgww" podStartSLOduration=3.919172845 podStartE2EDuration="9.761504834s" podCreationTimestamp="2026-01-24 07:11:36 +0000 UTC" firstStartedPulling="2026-01-24 07:11:39.313627259 +0000 UTC m=+1100.609732482" lastFinishedPulling="2026-01-24 07:11:45.155959248 +0000 UTC m=+1106.452064471" observedRunningTime="2026-01-24 07:11:45.756320759 +0000 UTC m=+1107.052425982" watchObservedRunningTime="2026-01-24 07:11:45.761504834 +0000 UTC m=+1107.057610057" Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.764024 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerStarted","Data":"f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67"} Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.769057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"075d9b1bd932aaed6d2af5cd675c905700a22594707c0714a8050c0c213eb0b4"} Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.769117 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"4956529c8ae5702877fc98004b8fb7c945d4c35d72fe1221fa277be750eddbe3"} Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.790738 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-95xkb" podStartSLOduration=2.90086547 podStartE2EDuration="33.790703267s" podCreationTimestamp="2026-01-24 07:11:13 +0000 UTC" firstStartedPulling="2026-01-24 07:11:14.611897491 +0000 UTC m=+1075.908002714" lastFinishedPulling="2026-01-24 07:11:45.501735288 +0000 UTC m=+1106.797840511" observedRunningTime="2026-01-24 07:11:46.783087243 +0000 UTC m=+1108.079192476" watchObservedRunningTime="2026-01-24 07:11:46.790703267 +0000 UTC m=+1108.086808500" Jan 24 07:11:48 crc kubenswrapper[4675]: I0124 07:11:48.792507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"809153b31ad5c24469cfc83a2aee0404877249b17bc1fb90cd7fb5879cead740"} Jan 24 07:11:48 crc kubenswrapper[4675]: I0124 07:11:48.793452 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"efc12e4d15de27c184e04f72877609d9a106c023223187f2182842c8a9d1f7cc"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807115 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"0f6da2a4e817a348c9f135758a18b9cec7894c8aea285f51eff95bfef121fd55"} Jan 24 07:11:49 crc 
kubenswrapper[4675]: I0124 07:11:49.807582 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"b944801ea62a5cffc238e63a890a526a10226ed7eec176ad51adf44037d115fb"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"1ead6c386fb66aa4edcfdf05a198cfb2bf5a347be6cf3a28e90ac538a560303d"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807603 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"6b355ae3f48a06f8ca33b1dd301f098b42c925f43ddc11eef607461cb1d33fd6"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807611 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"8366e0c49d57fd2d3ad9edb8b10fa14b6769efd8be1fec3acc0b0d7dcd1cc80f"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.848175 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.814040431 podStartE2EDuration="49.848159461s" podCreationTimestamp="2026-01-24 07:11:00 +0000 UTC" firstStartedPulling="2026-01-24 07:11:34.297955904 +0000 UTC m=+1095.594061137" lastFinishedPulling="2026-01-24 07:11:48.332074944 +0000 UTC m=+1109.628180167" observedRunningTime="2026-01-24 07:11:49.841638533 +0000 UTC m=+1111.137743766" watchObservedRunningTime="2026-01-24 07:11:49.848159461 +0000 UTC m=+1111.144264684" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197159 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:50 crc 
kubenswrapper[4675]: E0124 07:11:50.197455 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197471 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197482 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f802e166-b89b-4e38-9230-762edc86b32c" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197488 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f802e166-b89b-4e38-9230-762edc86b32c" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197500 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197506 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197520 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197526 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197543 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197549 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197563 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197569 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197699 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f802e166-b89b-4e38-9230-762edc86b32c" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197712 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197737 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197748 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197758 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerName="mariadb-database-create" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197767 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerName="mariadb-account-create-update" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.198475 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.200589 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.222359 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358754 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460153 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460207 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460229 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460262 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462614 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462695 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc 
kubenswrapper[4675]: I0124 07:11:50.462758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.463469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.491177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.512083 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.954597 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.824830 4675 generic.go:334] "Generic (PLEG): container finished" podID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerID="1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9" exitCode=0 Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.824894 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerDied","Data":"1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9"} Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.827337 4675 generic.go:334] "Generic (PLEG): container finished" podID="6391092e-27b4-4604-8a19-416c5073b6bf" containerID="495b578ee6202ae9668863232052b610d816ffa68e81d10913cfb8812139ec2f" exitCode=0 Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.827381 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerDied","Data":"495b578ee6202ae9668863232052b610d816ffa68e81d10913cfb8812139ec2f"} Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.827425 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerStarted","Data":"0ab1f402eb6037ac10eccabab78ec60fe868a1b4dcab74a8478319dd857695e3"} Jan 24 07:11:52 crc kubenswrapper[4675]: I0124 07:11:52.835664 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerStarted","Data":"0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1"} Jan 
24 07:11:52 crc kubenswrapper[4675]: I0124 07:11:52.837392 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:52 crc kubenswrapper[4675]: I0124 07:11:52.858228 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" podStartSLOduration=2.85820544 podStartE2EDuration="2.85820544s" podCreationTimestamp="2026-01-24 07:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:52.851350855 +0000 UTC m=+1114.147456068" watchObservedRunningTime="2026-01-24 07:11:52.85820544 +0000 UTC m=+1114.154310673" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.146252 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.310641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"c949a736-b46d-4907-a24d-17f28f4e3f71\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.310698 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"c949a736-b46d-4907-a24d-17f28f4e3f71\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.310834 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"c949a736-b46d-4907-a24d-17f28f4e3f71\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " Jan 
24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.317002 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc" (OuterVolumeSpecName: "kube-api-access-mv7lc") pod "c949a736-b46d-4907-a24d-17f28f4e3f71" (UID: "c949a736-b46d-4907-a24d-17f28f4e3f71"). InnerVolumeSpecName "kube-api-access-mv7lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.346896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data" (OuterVolumeSpecName: "config-data") pod "c949a736-b46d-4907-a24d-17f28f4e3f71" (UID: "c949a736-b46d-4907-a24d-17f28f4e3f71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.348139 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c949a736-b46d-4907-a24d-17f28f4e3f71" (UID: "c949a736-b46d-4907-a24d-17f28f4e3f71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.412518 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.412558 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.412567 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.848030 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww" Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.848653 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerDied","Data":"243cc2bd6dd3a09154104a9083678353839a94d8467ae1e1de86d7b6bc695da9"} Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.848688 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243cc2bd6dd3a09154104a9083678353839a94d8467ae1e1de86d7b6bc695da9" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.148267 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xr9fm"] Jan 24 07:11:54 crc kubenswrapper[4675]: E0124 07:11:54.148885 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerName="keystone-db-sync" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 
07:11:54.148903 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerName="keystone-db-sync" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.149231 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerName="keystone-db-sync" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.149942 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.164151 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.164366 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.164391 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.165026 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.183608 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.194480 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.220706 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.226980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"keystone-bootstrap-xr9fm\" (UID: 
\"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227149 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227176 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"keystone-bootstrap-xr9fm\" (UID: 
\"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.247940 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.249494 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.307765 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340576 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340621 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340667 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqh4l\" (UniqueName: 
\"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340707 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340770 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340789 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340822 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340847 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340881 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340941 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.363878 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.373399 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.376538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.383556 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.404278 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.425909 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446561 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: 
\"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446633 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446676 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446752 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446774 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " 
pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.447518 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.448029 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.448555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.448830 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.449348 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.460571 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.461967 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.467639 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.476850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.477060 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r2l2l" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.477269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.535540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.547645 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.547680 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod 
\"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.548190 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.553816 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.592999 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.594437 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.601876 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.602077 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.602246 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.615204 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.640793 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tvkgt" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.653838 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.654062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.654163 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.705685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.706627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.725587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.744312 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.746212 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.748535 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.748859 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.752206 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.754976 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-mvflk" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756138 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756250 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756277 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.784954 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.786077 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.798214 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gdfs9" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.798419 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.798534 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.801914 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.812198 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.822201 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.838257 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.838435 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.843866 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.855790 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858664 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858680 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858732 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858749 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858774 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858789 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858807 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858824 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858873 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858892 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858935 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.862040 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.871696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.880916 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.895108 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.902211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.959852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960137 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960230 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod 
\"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960391 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960459 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960640 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960746 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc 
kubenswrapper[4675]: I0124 07:11:54.960815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960886 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960976 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961103 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961264 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961481 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961602 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.963319 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"horizon-d69f445c7-kqzw8\" (UID: 
\"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.993036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.994108 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.994183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.994458 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.995780 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.996214 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"placement-db-sync-fp9qw\" 
(UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:54.996930 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.002898 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.003911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.004216 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:55 crc kubenswrapper[4675]: E0124 07:11:55.023684 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.028822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.037598 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.086177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.087066 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093387 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093574 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093624 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093657 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093731 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093784 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"ceilometer-0\" (UID: 
\"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093895 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093952 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.094025 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" 
Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.094968 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.099232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.104057 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.113421 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.131146 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.163828 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.168609 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.178086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.183175 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.203918 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204008 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204077 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204147 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204947 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.205077 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.215264 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.215585 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.215913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.217174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.232705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.239590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.245463 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.251407 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.284806 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312828 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312970 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.313018 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.320829 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.321920 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.329406 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.330072 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9pmfh" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.334823 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.355165 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414739 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"horizon-56ff9c89dc-jttpz\" (UID: 
\"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417756 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.419238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.419318 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.419466 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.423539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.462463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.509202 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.519741 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.519788 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.519811 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.534508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.547224 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc 
kubenswrapper[4675]: I0124 07:11:55.551251 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.632296 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.647324 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.800331 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.818532 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:55 crc kubenswrapper[4675]: W0124 07:11:55.861393 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod871f5758_f078_4271_acb9_e5ca8bfdc2eb.slice/crio-f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e WatchSource:0}: Error finding container f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e: Status 404 returned error can't find the container with id f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.937850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerStarted","Data":"2a00eab4c015c2a71c3302cf901cc256a67db21e51a0bdeb53f6a384b0ab080c"} Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.950493 4675 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns" containerID="cri-o://0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1" gracePeriod=10 Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.950603 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerStarted","Data":"f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e"} Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.973734 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"] Jan 24 07:11:55 crc kubenswrapper[4675]: W0124 07:11:55.998651 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7854ef09_5060_4534_96e2_2963cddcc691.slice/crio-20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718 WatchSource:0}: Error finding container 20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718: Status 404 returned error can't find the container with id 20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718 Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.319908 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.406870 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:11:56 crc kubenswrapper[4675]: W0124 07:11:56.429474 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d590a0d_6c41_407a_8e89_3e7b9a64a3f7.slice/crio-c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc WatchSource:0}: Error finding container 
c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc: Status 404 returned error can't find the container with id c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.434549 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.440255 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.555548 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.587231 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:11:56 crc kubenswrapper[4675]: W0124 07:11:56.606999 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb93eadf_9c52_436f_8dcc_16a7ad976254.slice/crio-9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98 WatchSource:0}: Error finding container 9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98: Status 404 returned error can't find the container with id 9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98 Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.607141 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.933485 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.959410 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.987401 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d69f445c7-kqzw8" 
event={"ID":"c4464d27-9360-4f78-92cd-3b9d11204ec2","Type":"ContainerStarted","Data":"6300eddfd5812e2ef5a13cb0e83a7dac291f0af984180d712cb6ee55436346f3"} Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.992435 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.994374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.012075 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerStarted","Data":"fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.024355 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerStarted","Data":"c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.025856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56ff9c89dc-jttpz" event={"ID":"cb93eadf-9c52-436f-8dcc-16a7ad976254","Type":"ContainerStarted","Data":"9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.030054 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerStarted","Data":"24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.030615 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" 
event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerStarted","Data":"9da5085f19a402a7076d2b62d3720d8bab0822f44dc0a613a31fb3c57b813329"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.040324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerStarted","Data":"112402427a5eb414fe7cfc4f30de89d1b0218f39fa69ddaa6dd77168312cb7ae"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.043572 4675 generic.go:334] "Generic (PLEG): container finished" podID="6391092e-27b4-4604-8a19-416c5073b6bf" containerID="0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1" exitCode=0 Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.043676 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerDied","Data":"0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.056866 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094146 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" 
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094233 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094291 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.098919 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerStarted","Data":"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"}
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.099094 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init" containerID="cri-o://30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" gracePeriod=10
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.125848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"0ad4338e6f939f6bda642b2d5397708669ef3b6004444834c598ae8f3b747800"}
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.132177 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4hsxg" podStartSLOduration=3.132154694 podStartE2EDuration="3.132154694s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:57.097400125 +0000 UTC m=+1118.393505348" watchObservedRunningTime="2026-01-24 07:11:57.132154694 +0000 UTC m=+1118.428259917"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.141346 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerStarted","Data":"5b259eb76af8e66f76ee1bcfd7ccd3f155f31927bbacf08cb7666192371fbd27"}
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerStarted","Data":"6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6"}
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222804 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerStarted","Data":"20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718"}
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222836 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222880 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222912 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222957 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.225371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.225603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.226039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.244866 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.255220 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xr9fm" podStartSLOduration=3.255206062 podStartE2EDuration="3.255206062s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:57.253138891 +0000 UTC m=+1118.549244114" watchObservedRunningTime="2026-01-24 07:11:57.255206062 +0000 UTC m=+1118.551311285"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.258391 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.319624 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.459549 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534151 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534290 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534398 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534432 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534479 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.587445 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz" (OuterVolumeSpecName: "kube-api-access-vtjqz") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "kube-api-access-vtjqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.615531 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.655929 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.657658 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.657703 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.657760 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.712989 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config" (OuterVolumeSpecName: "config") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.735485 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.753150 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.762064 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.762109 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.787492 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863438 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863492 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863530 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863564 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863635 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") "
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.864180 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.880928 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l" (OuterVolumeSpecName: "kube-api-access-zqh4l") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "kube-api-access-zqh4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.900286 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config" (OuterVolumeSpecName: "config") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.901245 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.902134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.921020 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.921046 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966085 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966118 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966128 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966137 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966145 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966153 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.158057 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"]
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.246030 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerID="24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e" exitCode=0
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.246102 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerDied","Data":"24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e"}
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.292380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerDied","Data":"0ab1f402eb6037ac10eccabab78ec60fe868a1b4dcab74a8478319dd857695e3"}
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.292441 4675 scope.go:117] "RemoveContainer" containerID="0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.292598 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319186 4675 generic.go:334] "Generic (PLEG): container finished" podID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" exitCode=0
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerDied","Data":"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"}
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319265 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerDied","Data":"2a00eab4c015c2a71c3302cf901cc256a67db21e51a0bdeb53f6a384b0ab080c"}
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319318 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.337231 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b6dd9b6f-mms9h" event={"ID":"f7a6babb-0cb5-4967-9e60-749d73be754b","Type":"ContainerStarted","Data":"3dbb003cca25be0a35bc048d9a61e606b4e3d23ac2e4ee99addd88a24699871f"}
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.391001 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"]
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.413875 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"]
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.487145 4675 scope.go:117] "RemoveContainer" containerID="495b578ee6202ae9668863232052b610d816ffa68e81d10913cfb8812139ec2f"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.491602 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"]
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.530969 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"]
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.573934 4675 scope.go:117] "RemoveContainer" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.675776 4675 scope.go:117] "RemoveContainer" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"
Jan 24 07:11:58 crc kubenswrapper[4675]: E0124 07:11:58.676395 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95\": container with ID starting with 30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95 not found: ID does not exist" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.676440 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"} err="failed to get container status \"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95\": rpc error: code = NotFound desc = could not find container \"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95\": container with ID starting with 30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95 not found: ID does not exist"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.959415 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" path="/var/lib/kubelet/pods/6391092e-27b4-4604-8a19-416c5073b6bf/volumes"
Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.960617 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" path="/var/lib/kubelet/pods/6602e4dc-7422-48ec-9a0f-919faff36b4e/volumes"
Jan 24 07:11:59 crc kubenswrapper[4675]: I0124 07:11:59.347451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerStarted","Data":"6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1"}
Jan 24 07:11:59 crc kubenswrapper[4675]: I0124 07:11:59.348477 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-td45s"
Jan 24 07:12:02 crc kubenswrapper[4675]: I0124 07:12:02.423325 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e53c5a1-6293-46d9-9783-e7d183050152" containerID="f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67" exitCode=0
Jan 24 07:12:02 crc kubenswrapper[4675]: I0124 07:12:02.423413 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerDied","Data":"f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67"}
Jan 24 07:12:02 crc kubenswrapper[4675]: I0124 07:12:02.449988 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-td45s" podStartSLOduration=8.449966934 podStartE2EDuration="8.449966934s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:59.369921956 +0000 UTC m=+1120.666027179" watchObservedRunningTime="2026-01-24 07:12:02.449966934 +0000 UTC m=+1123.746072157"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.784520 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.810458 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6565db7666-dt2lk"]
Jan 24 07:12:04 crc kubenswrapper[4675]: E0124 07:12:03.812157 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812172 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init"
Jan 24 07:12:04 crc kubenswrapper[4675]: E0124 07:12:03.812203 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812209 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns"
Jan 24 07:12:04 crc kubenswrapper[4675]: E0124 07:12:03.812222 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="init"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812227 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="init"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812376 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812398 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.813258 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.815853 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.832085 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.924401 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938066 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938124 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938153 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938177 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.971099 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-656ff794dd-jx8ld"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.972472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.022237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-656ff794dd-jx8ld"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048829 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048855 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048878 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048949 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.050600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.051195 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.054914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.060702 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.076234 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.080534 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.093670 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.147197 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7e7730-0a42-48b0-bb7e-da95eb915126-logs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151814 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-config-data\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151837 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-secret-key\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151860 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-scripts\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-combined-ca-bundle\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151906 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-tls-certs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jhc\" (UniqueName: \"kubernetes.io/projected/4b7e7730-0a42-48b0-bb7e-da95eb915126-kube-api-access-94jhc\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7e7730-0a42-48b0-bb7e-da95eb915126-logs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253564 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-config-data\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-secret-key\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-scripts\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-combined-ca-bundle\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253649 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-tls-certs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jhc\" (UniqueName: \"kubernetes.io/projected/4b7e7730-0a42-48b0-bb7e-da95eb915126-kube-api-access-94jhc\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.254434 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7e7730-0a42-48b0-bb7e-da95eb915126-logs\") pod \"horizon-656ff794dd-jx8ld\" 
(UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.255594 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-config-data\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.257393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-scripts\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.262496 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-secret-key\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.267904 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-combined-ca-bundle\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.268273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-tls-certs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 
07:12:04.288572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94jhc\" (UniqueName: \"kubernetes.io/projected/4b7e7730-0a42-48b0-bb7e-da95eb915126-kube-api-access-94jhc\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.310223 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.450356 4675 generic.go:334] "Generic (PLEG): container finished" podID="7854ef09-5060-4534-96e2-2963cddcc691" containerID="6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6" exitCode=0 Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.450406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerDied","Data":"6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6"} Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.357980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:12:05 crc kubenswrapper[4675]: E0124 07:12:05.395494 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.441214 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.441539 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" containerID="cri-o://c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41" gracePeriod=10 Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.641348 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 24 07:12:06 crc kubenswrapper[4675]: I0124 07:12:06.487197 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1e65888-5032-411e-8910-5438e0aff32f" containerID="c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41" exitCode=0 Jan 24 07:12:06 crc kubenswrapper[4675]: I0124 07:12:06.487401 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerDied","Data":"c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41"} Jan 24 07:12:08 crc kubenswrapper[4675]: I0124 07:12:08.630445 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:12:08 crc kubenswrapper[4675]: I0124 07:12:08.630797 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.438363 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.498375 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.499283 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.500009 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.500094 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.504121 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: 
"7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.506964 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt" (OuterVolumeSpecName: "kube-api-access-f6jbt") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "kube-api-access-f6jbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.531137 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerDied","Data":"aba894555e46ec22e81cf8b996b4a30e472efbd5119d3ffa6f69ee65a8d156ee"} Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.531186 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba894555e46ec22e81cf8b996b4a30e472efbd5119d3ffa6f69ee65a8d156ee" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.531260 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.546799 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.604662 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data" (OuterVolumeSpecName: "config-data") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.605026 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.606627 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.606639 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.642155 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.709341 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.894338 4675 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:11 crc kubenswrapper[4675]: E0124 07:12:11.896765 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" containerName="glance-db-sync" Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.896795 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" containerName="glance-db-sync" Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.897004 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" containerName="glance-db-sync" Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.897875 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.927788 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.040926 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041037 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142456 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142532 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.143380 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.143525 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.144202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.144206 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.147405 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.189841 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod 
\"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.229178 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.975356 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.976774 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.979943 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.980156 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.980222 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wwtw8" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:12.999377 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.061793 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.063943 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.066230 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.098390 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099599 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099655 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099740 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " 
pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.100079 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.100134 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202182 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202234 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " 
pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202261 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202277 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202293 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202374 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " 
pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202402 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202428 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202451 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202468 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202507 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 
07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202536 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202567 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202837 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.205687 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.205805 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.210667 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.218900 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.219099 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.230393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.237539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304145 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304169 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304589 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304654 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304686 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304970 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.305060 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.305381 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.310344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.311072 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.317977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.326454 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " 
pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.373913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.378148 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:14 crc kubenswrapper[4675]: I0124 07:12:14.759369 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:14 crc kubenswrapper[4675]: I0124 07:12:14.840513 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:15 crc kubenswrapper[4675]: I0124 07:12:15.640706 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 24 07:12:15 crc kubenswrapper[4675]: I0124 07:12:15.641519 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:12:15 crc kubenswrapper[4675]: E0124 07:12:15.645000 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 
24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.761841 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.762366 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h645hbfh8ch8h97h656h56dh675h545h8bh77hbbh6h546h9ch55fh5f5h5dfh5ch594h6ch65h64bhch5fbh5b6h5c5h8fh54h5f7hfcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd6wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56ff9c89dc-jttpz_openstack(cb93eadf-9c52-436f-8dcc-16a7ad976254): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.787195 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-56ff9c89dc-jttpz" podUID="cb93eadf-9c52-436f-8dcc-16a7ad976254" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.788325 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.788459 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4hbfh5cfh666h547h667h57chddh84h7dh8fh546h99h5fch55bhd4h549h687hd9h666h59chfdhbdh5bdh9fh58hb9h57dh5b7h67dhc8h576q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps82j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-d69f445c7-kqzw8_openstack(c4464d27-9360-4f78-92cd-3b9d11204ec2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 
07:12:16.797459 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-d69f445c7-kqzw8" podUID="c4464d27-9360-4f78-92cd-3b9d11204ec2" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.812217 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.812401 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h5bfh699h575hcbh68fhfdh5bfh5cfh9chbh66fh658hf4hf5h88h67h65fh54bhc4h645hd9h546h68dh8fh5d9h85h89h6fh587h548h5fcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkt2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66b6dd9b6f-mms9h_openstack(f7a6babb-0cb5-4967-9e60-749d73be754b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 
07:12:16.817605 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66b6dd9b6f-mms9h" podUID="f7a6babb-0cb5-4967-9e60-749d73be754b" Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.853980 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.990651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.990776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.990858 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.991346 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" 
(UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.992092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.992179 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.996869 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.997035 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.005882 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts" (OuterVolumeSpecName: "scripts") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.023967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97" (OuterVolumeSpecName: "kube-api-access-bms97") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "kube-api-access-bms97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.032280 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data" (OuterVolumeSpecName: "config-data") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.032825 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.095937 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.095983 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.095997 4675 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.096010 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.096022 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.096030 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.606521 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.609093 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerDied","Data":"20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718"} Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.609145 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.974996 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"] Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.032094 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"] Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.063029 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:12:18 crc kubenswrapper[4675]: E0124 07:12:18.064447 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7854ef09-5060-4534-96e2-2963cddcc691" containerName="keystone-bootstrap" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.064509 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7854ef09-5060-4534-96e2-2963cddcc691" containerName="keystone-bootstrap" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.064960 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7854ef09-5060-4534-96e2-2963cddcc691" containerName="keystone-bootstrap" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.065653 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.076050 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.080949 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.081154 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.081280 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.081780 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.102311 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233463 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233516 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233604 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233672 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233761 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233851 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336156 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336191 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336250 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336335 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.342000 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"keystone-bootstrap-v7kb4\" (UID: 
\"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.342205 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.344250 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.345544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.352231 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.356277 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 
07:12:18.404685 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.963090 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7854ef09-5060-4534-96e2-2963cddcc691" path="/var/lib/kubelet/pods/7854ef09-5060-4534-96e2-2963cddcc691/volumes" Jan 24 07:12:21 crc kubenswrapper[4675]: I0124 07:12:21.644254 4675 generic.go:334] "Generic (PLEG): container finished" podID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerID="fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902" exitCode=0 Jan 24 07:12:21 crc kubenswrapper[4675]: I0124 07:12:21.644434 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerDied","Data":"fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902"} Jan 24 07:12:25 crc kubenswrapper[4675]: I0124 07:12:25.641356 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 24 07:12:30 crc kubenswrapper[4675]: I0124 07:12:30.642579 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 24 07:12:31 crc kubenswrapper[4675]: E0124 07:12:31.717958 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 24 07:12:31 crc kubenswrapper[4675]: E0124 07:12:31.718124 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8h67dhfch57bh667h676h89h5d4h5b7h5b8h8fhc8h5bbhfh54dh66chcdh695h98hc4h579h8fhfch586h59dh5d4h64ch5ch595h9bhc7h94q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb8hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(62b7e06f-b840-408c-b026-a086b975812f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.152605 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.157586 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txvlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g8f6m_openstack(57270c73-9e5a-4629-8c7a-85123438a067): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.159064 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g8f6m" 
podUID="57270c73-9e5a-4629-8c7a-85123438a067" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.336147 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.342091 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.347699 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.356132 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.366412 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.458957 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459001 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459030 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: 
\"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459095 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459143 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459165 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459194 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459243 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459324 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459344 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " Jan 24 07:12:32 crc kubenswrapper[4675]: 
I0124 07:12:32.459395 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459440 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459463 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459481 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459499 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\" (UID: 
\"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459586 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459610 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs" (OuterVolumeSpecName: "logs") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs" (OuterVolumeSpecName: "logs") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461455 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data" (OuterVolumeSpecName: "config-data") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts" (OuterVolumeSpecName: "scripts") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs" (OuterVolumeSpecName: "logs") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.462595 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data" (OuterVolumeSpecName: "config-data") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.462705 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data" (OuterVolumeSpecName: "config-data") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.466497 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts" (OuterVolumeSpecName: "scripts") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.469199 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts" (OuterVolumeSpecName: "scripts") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.480956 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf" (OuterVolumeSpecName: "kube-api-access-gd6wf") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "kube-api-access-gd6wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.481192 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6" (OuterVolumeSpecName: "kube-api-access-xd8c6") pod "871f5758-f078-4271-acb9-e5ca8bfdc2eb" (UID: "871f5758-f078-4271-acb9-e5ca8bfdc2eb"). InnerVolumeSpecName "kube-api-access-xd8c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.487036 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j" (OuterVolumeSpecName: "kube-api-access-ps82j") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "kube-api-access-ps82j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.489162 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.489288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.498331 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l" (OuterVolumeSpecName: "kube-api-access-xkt2l") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "kube-api-access-xkt2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.512288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.530898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw" (OuterVolumeSpecName: "kube-api-access-gjfhw") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "kube-api-access-gjfhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.541803 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "871f5758-f078-4271-acb9-e5ca8bfdc2eb" (UID: "871f5758-f078-4271-acb9-e5ca8bfdc2eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.548860 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config" (OuterVolumeSpecName: "config") pod "871f5758-f078-4271-acb9-e5ca8bfdc2eb" (UID: "871f5758-f078-4271-acb9-e5ca8bfdc2eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.554760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561310 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561353 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561368 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561379 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc 
kubenswrapper[4675]: I0124 07:12:32.561389 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561398 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561407 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561416 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561424 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561432 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561441 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561449 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561457 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561465 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561554 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561567 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561576 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561584 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561593 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc 
kubenswrapper[4675]: I0124 07:12:32.561601 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.596670 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config" (OuterVolumeSpecName: "config") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.609798 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.626307 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.663828 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.663896 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.663911 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.749201 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56ff9c89dc-jttpz" event={"ID":"cb93eadf-9c52-436f-8dcc-16a7ad976254","Type":"ContainerDied","Data":"9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.749259 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.753529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b6dd9b6f-mms9h" event={"ID":"f7a6babb-0cb5-4967-9e60-749d73be754b","Type":"ContainerDied","Data":"3dbb003cca25be0a35bc048d9a61e606b4e3d23ac2e4ee99addd88a24699871f"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.753584 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.756842 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerDied","Data":"e4e440f949a16c7c92a1572ceb6020eb2c0abbdd347846f7e3ad225704016290"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.756887 4675 scope.go:117] "RemoveContainer" containerID="c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.757136 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.765461 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.765960 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d69f445c7-kqzw8" event={"ID":"c4464d27-9360-4f78-92cd-3b9d11204ec2","Type":"ContainerDied","Data":"6300eddfd5812e2ef5a13cb0e83a7dac291f0af984180d712cb6ee55436346f3"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.776392 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.776787 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerDied","Data":"f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.776853 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e" Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.781267 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g8f6m" podUID="57270c73-9e5a-4629-8c7a-85123438a067" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.853366 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.873402 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.977459 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb93eadf-9c52-436f-8dcc-16a7ad976254" path="/var/lib/kubelet/pods/cb93eadf-9c52-436f-8dcc-16a7ad976254/volumes" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.977894 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.022664 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.042710 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.052298 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.073903 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.087078 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.614152 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.734264 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:12:33 crc kubenswrapper[4675]: E0124 07:12:33.735067 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="init" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735087 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="init" Jan 24 07:12:33 crc kubenswrapper[4675]: E0124 07:12:33.735098 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerName="neutron-db-sync" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerName="neutron-db-sync" Jan 24 07:12:33 crc kubenswrapper[4675]: E0124 07:12:33.735123 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735129 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e65888-5032-411e-8910-5438e0aff32f" 
containerName="dnsmasq-dns" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735365 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735537 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerName="neutron-db-sync" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.736511 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.792827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799825 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " 
pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799875 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799904 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.895822 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.898122 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901040 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901118 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901227 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 
07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.902249 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.902305 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.902852 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.903073 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.903356 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917436 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r2l2l" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917705 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917944 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917947 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.929588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.947822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.006980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007376 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007468 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007713 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.064097 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109799 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109869 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc 
kubenswrapper[4675]: I0124 07:12:34.123886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.127529 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.139417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.139490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.150987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.268875 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.804239 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.804799 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psx25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-58bxq_openstack(0d590a0d-6c41-407a-8e89-3e7b9a64a3f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.806057 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-58bxq" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.859877 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-58bxq" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.961658 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e65888-5032-411e-8910-5438e0aff32f" 
path="/var/lib/kubelet/pods/b1e65888-5032-411e-8910-5438e0aff32f/volumes" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.962441 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4464d27-9360-4f78-92cd-3b9d11204ec2" path="/var/lib/kubelet/pods/c4464d27-9360-4f78-92cd-3b9d11204ec2/volumes" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.962881 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a6babb-0cb5-4967-9e60-749d73be754b" path="/var/lib/kubelet/pods/f7a6babb-0cb5-4967-9e60-749d73be754b/volumes" Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.099042 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.399628 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-656ff794dd-jx8ld"] Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.525913 4675 scope.go:117] "RemoveContainer" containerID="ae8e22c487bc5bca69369f08e9cf6514b43a32b610d114fdbb4d48fac338177d" Jan 24 07:12:35 crc kubenswrapper[4675]: W0124 07:12:35.579289 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b7e7730_0a42_48b0_bb7e_da95eb915126.slice/crio-8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4 WatchSource:0}: Error finding container 8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4: Status 404 returned error can't find the container with id 8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4 Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.644147 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.873528 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4"} Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.876146 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerStarted","Data":"d950b238b60f1812543dfb4f7f5294f5560f40c993673b23b13c0d2609edbe30"} Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.193304 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.514601 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:12:36 crc kubenswrapper[4675]: W0124 07:12:36.554765 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5da3fd8e_4f1c_4a68_ae8d_ab0b06193e01.slice/crio-a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d WatchSource:0}: Error finding container a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d: Status 404 returned error can't find the container with id a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.615555 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.668705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.673483 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.685564 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.685871 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.729564 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789997 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.790048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.790064 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.790099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891791 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891829 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891878 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891895 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891922 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891971 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod 
\"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.913152 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.918319 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.931344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.931885 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.939531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.948267 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.953316 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.979397 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.028542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerStarted","Data":"a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.028576 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" event={"ID":"b4b87366-fdf2-4654-aab4-efa74076b162","Type":"ContainerStarted","Data":"ba3cb824f7658dc7273442d0f03b26f2e1b17aa2660c55aee97db099bc8849ca"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.028586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerStarted","Data":"d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.034359 4675 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.040092 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.055256 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-fp9qw" podStartSLOduration=7.345451675 podStartE2EDuration="43.055237543s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="2026-01-24 07:11:56.487457554 +0000 UTC m=+1117.783562767" lastFinishedPulling="2026-01-24 07:12:32.197243402 +0000 UTC m=+1153.493348635" observedRunningTime="2026-01-24 07:12:37.040316613 +0000 UTC m=+1158.336421836" watchObservedRunningTime="2026-01-24 07:12:37.055237543 +0000 UTC m=+1158.351342756" Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.119979 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.171973 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.941784 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.067033 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerID="a357ef50188a1acd7da313f1d5fc0be108c9ca15168c4882b220cb0612f377a4" exitCode=0 Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.067339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" 
event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerDied","Data":"a357ef50188a1acd7da313f1d5fc0be108c9ca15168c4882b220cb0612f377a4"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.067494 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerStarted","Data":"b1d2359f9bd1730fd38d06c61fbb2923f790bf4bfe4ea9760488e068602ba6b1"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.078194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerStarted","Data":"7718fcd3b8f83feea63d09a395c7695283cd48482e3fdf877f211fe1be62a3b9"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.092855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"d623efa84153d9478067cecc083922766f30de0af338ee7da4123256d77162f1"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.110344 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerStarted","Data":"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.110407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerStarted","Data":"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.151041 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" 
event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerStarted","Data":"5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.151087 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerStarted","Data":"9a865440f381a7417cf12468f043671da2b8b23ef036f738147c217bd9897103"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.157768 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerStarted","Data":"eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.157959 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6565db7666-dt2lk" podStartSLOduration=34.553713585 podStartE2EDuration="35.157930588s" podCreationTimestamp="2026-01-24 07:12:03 +0000 UTC" firstStartedPulling="2026-01-24 07:12:35.642683813 +0000 UTC m=+1156.938789036" lastFinishedPulling="2026-01-24 07:12:36.246900816 +0000 UTC m=+1157.543006039" observedRunningTime="2026-01-24 07:12:38.151500704 +0000 UTC m=+1159.447605937" watchObservedRunningTime="2026-01-24 07:12:38.157930588 +0000 UTC m=+1159.454035821" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.169890 4675 generic.go:334] "Generic (PLEG): container finished" podID="b4b87366-fdf2-4654-aab4-efa74076b162" containerID="c61bf5984b471f88cfdd02231ae03b8b91f374729b2fde1b61566e741f61d5d3" exitCode=0 Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.169971 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" event={"ID":"b4b87366-fdf2-4654-aab4-efa74076b162","Type":"ContainerDied","Data":"c61bf5984b471f88cfdd02231ae03b8b91f374729b2fde1b61566e741f61d5d3"} Jan 24 07:12:38 crc 
kubenswrapper[4675]: I0124 07:12:38.214633 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerStarted","Data":"583775ff60e4fb2db235511c186b845c09489d7ccd9368474576c53217f77ef8"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.243788 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v7kb4" podStartSLOduration=20.243767488 podStartE2EDuration="20.243767488s" podCreationTimestamp="2026-01-24 07:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:38.201814666 +0000 UTC m=+1159.497919989" watchObservedRunningTime="2026-01-24 07:12:38.243767488 +0000 UTC m=+1159.539872711" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.637148 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.637508 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.637775 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.638426 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.638483 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c" gracePeriod=600 Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.808424 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896475 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896565 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896585 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896741 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896795 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.931663 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr" (OuterVolumeSpecName: "kube-api-access-qhlvr") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "kube-api-access-qhlvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.949494 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.995198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:38.999603 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:38.999645 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:38.999657 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.006827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.007198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.008920 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config" (OuterVolumeSpecName: "config") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.013379 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.103239 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.104161 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.104236 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.257441 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" 
event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.283733 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerStarted","Data":"7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.284826 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.286211 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerStarted","Data":"5ce213dfa2f439c1f1ec2ddd0ebc6a5f1f2676cc28fbcde221469753153be07d"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.291483 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-656ff794dd-jx8ld" podStartSLOduration=35.16918923 podStartE2EDuration="36.291459198s" podCreationTimestamp="2026-01-24 07:12:03 +0000 UTC" firstStartedPulling="2026-01-24 07:12:35.693276853 +0000 UTC m=+1156.989382076" lastFinishedPulling="2026-01-24 07:12:36.815546831 +0000 UTC m=+1158.111652044" observedRunningTime="2026-01-24 07:12:39.279763566 +0000 UTC m=+1160.575868789" watchObservedRunningTime="2026-01-24 07:12:39.291459198 +0000 UTC m=+1160.587564421" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.299034 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" event={"ID":"b4b87366-fdf2-4654-aab4-efa74076b162","Type":"ContainerDied","Data":"ba3cb824f7658dc7273442d0f03b26f2e1b17aa2660c55aee97db099bc8849ca"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.299083 4675 scope.go:117] "RemoveContainer" 
containerID="c61bf5984b471f88cfdd02231ae03b8b91f374729b2fde1b61566e741f61d5d3" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.299202 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.306609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerStarted","Data":"9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.319393 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerStarted","Data":"13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.321287 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.332707 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67cddfd9dd-rbhzj" podStartSLOduration=6.332679762 podStartE2EDuration="6.332679762s" podCreationTimestamp="2026-01-24 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:39.325809716 +0000 UTC m=+1160.621914949" watchObservedRunningTime="2026-01-24 07:12:39.332679762 +0000 UTC m=+1160.628784985" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.341046 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c" exitCode=0 Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.342583 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.362122 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" podStartSLOduration=6.362102432 podStartE2EDuration="6.362102432s" podCreationTimestamp="2026-01-24 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:39.355034591 +0000 UTC m=+1160.651139814" watchObservedRunningTime="2026-01-24 07:12:39.362102432 +0000 UTC m=+1160.658207655" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.519315 4675 scope.go:117] "RemoveContainer" containerID="9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.519608 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.549017 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.453484 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerStarted","Data":"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.484068 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerStarted","Data":"1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 
07:12:40.484393 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" containerID="cri-o://9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313" gracePeriod=30 Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.484932 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" containerID="cri-o://1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401" gracePeriod=30 Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.516050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.516030113 podStartE2EDuration="29.516030113s" podCreationTimestamp="2026-01-24 07:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:40.512177311 +0000 UTC m=+1161.808282534" watchObservedRunningTime="2026-01-24 07:12:40.516030113 +0000 UTC m=+1161.812135336" Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.530107 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerStarted","Data":"4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.551309 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.966590 4675 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" path="/var/lib/kubelet/pods/b4b87366-fdf2-4654-aab4-efa74076b162/volumes" Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561270 4675 generic.go:334] "Generic (PLEG): container finished" podID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerID="1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401" exitCode=143 Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561573 4675 generic.go:334] "Generic (PLEG): container finished" podID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerID="9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313" exitCode=143 Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerDied","Data":"1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401"} Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561623 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerDied","Data":"9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313"} Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.583568 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerStarted","Data":"afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d"} Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.583900 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log" containerID="cri-o://4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4" gracePeriod=30 Jan 24 07:12:42 crc 
kubenswrapper[4675]: I0124 07:12:42.584158 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd" containerID="cri-o://afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d" gracePeriod=30 Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.616600 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=30.616582147 podStartE2EDuration="30.616582147s" podCreationTimestamp="2026-01-24 07:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:42.612220031 +0000 UTC m=+1163.908325254" watchObservedRunningTime="2026-01-24 07:12:42.616582147 +0000 UTC m=+1163.912687370" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.620313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerStarted","Data":"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"} Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.620664 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.805141 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.825803 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cfd8b5875-msfrk" podStartSLOduration=6.825783863 podStartE2EDuration="6.825783863s" podCreationTimestamp="2026-01-24 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:42.657048792 +0000 UTC m=+1163.953154015" watchObservedRunningTime="2026-01-24 07:12:42.825783863 +0000 UTC m=+1164.121889086" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.923978 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924086 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924140 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: 
\"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924225 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.925098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.925958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs" (OuterVolumeSpecName: "logs") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.955398 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts" (OuterVolumeSpecName: "scripts") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.955656 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d" (OuterVolumeSpecName: "kube-api-access-f5x7d") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "kube-api-access-f5x7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.987047 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.007146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070858 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070895 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070909 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070918 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070927 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070937 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.099016 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.102068 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data" (OuterVolumeSpecName: "config-data") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.172709 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.172769 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.379304 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.379640 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.677875 4675 generic.go:334] "Generic (PLEG): container finished" podID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerID="afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d" exitCode=143 Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.677915 4675 generic.go:334] "Generic (PLEG): container finished" podID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerID="4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4" exitCode=143 Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.677980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerDied","Data":"afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d"} Jan 24 07:12:43 crc 
kubenswrapper[4675]: I0124 07:12:43.678011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerDied","Data":"4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4"} Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.694019 4675 generic.go:334] "Generic (PLEG): container finished" podID="f54df341-915c-4505-bd2e-81923b07a2be" containerID="d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5" exitCode=0 Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.694122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerDied","Data":"d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5"} Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.723208 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerDied","Data":"583775ff60e4fb2db235511c186b845c09489d7ccd9368474576c53217f77ef8"} Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.723275 4675 scope.go:117] "RemoveContainer" containerID="1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.723788 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.798791 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.820035 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.850894 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: E0124 07:12:43.859041 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" containerName="init" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859076 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" containerName="init" Jan 24 07:12:43 crc kubenswrapper[4675]: E0124 07:12:43.859112 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859119 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" Jan 24 07:12:43 crc kubenswrapper[4675]: E0124 07:12:43.859134 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859141 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859312 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" containerName="init" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859341 4675 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859359 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.860223 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.864904 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.865640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.869971 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985656 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985681 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985706 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985811 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.066598 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093385 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 
07:12:44.093435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093524 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093603 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.100542 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.101135 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.101325 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.101479 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.105263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.110437 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.113944 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.150608 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.152275 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.159634 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.168814 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.171891 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.172143 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-td45s" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" containerID="cri-o://6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1" gracePeriod=10 Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.187105 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.310822 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.310867 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.734173 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerID="6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1" exitCode=0 Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.734230 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerDied","Data":"6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1"} Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.741549 4675 generic.go:334] "Generic (PLEG): container finished" podID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerID="eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53" exitCode=0 Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.741735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerDied","Data":"eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53"} Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.956828 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" path="/var/lib/kubelet/pods/41c90fb1-ef62-4afe-bda9-4d6422af2ef1/volumes" Jan 24 07:12:45 crc kubenswrapper[4675]: I0124 07:12:45.356013 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-td45s" 
podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.443538 4675 scope.go:117] "RemoveContainer" containerID="9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.615552 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.638061 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.805066 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerDied","Data":"a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d"} Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.805106 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.805168 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.811998 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812103 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812153 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812188 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812203 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812341 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812457 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812476 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812506 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812556 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812583 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.813683 
4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs" (OuterVolumeSpecName: "logs") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.835616 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.835939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerDied","Data":"112402427a5eb414fe7cfc4f30de89d1b0218f39fa69ddaa6dd77168312cb7ae"} Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.836243 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="112402427a5eb414fe7cfc4f30de89d1b0218f39fa69ddaa6dd77168312cb7ae" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.837504 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6" (OuterVolumeSpecName: "kube-api-access-jbjt6") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "kube-api-access-jbjt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.838619 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts" (OuterVolumeSpecName: "scripts") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.839271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.841350 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts" (OuterVolumeSpecName: "scripts") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.850229 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.865480 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j" (OuterVolumeSpecName: "kube-api-access-x8g5j") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "kube-api-access-x8g5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.883667 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.883820 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data" (OuterVolumeSpecName: "config-data") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.894946 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data" (OuterVolumeSpecName: "config-data") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915304 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915332 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915343 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915352 4675 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915365 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915374 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915383 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915396 4675 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915403 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915412 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.924683 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.957712 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016315 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016431 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016663 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016692 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016825 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.017499 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.059412 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp" (OuterVolumeSpecName: "kube-api-access-jfxkp") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "kube-api-access-jfxkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.099560 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.105647 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.119823 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.119850 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.119860 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.148590 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.165069 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.166311 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config" (OuterVolumeSpecName: "config") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.223441 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.223476 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.223487 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: W0124 07:12:48.257604 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a50c14d_d518_492c_87d1_a194dc075c9f.slice/crio-3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58 WatchSource:0}: Error finding container 3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58: Status 404 returned error can't find the container with id 3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58 Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.257640 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.268088 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324292 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324375 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324470 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324618 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324818 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324870 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.325951 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.327528 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs" (OuterVolumeSpecName: "logs") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.337379 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts" (OuterVolumeSpecName: "scripts") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.338154 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.341860 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn" (OuterVolumeSpecName: "kube-api-access-5lxbn") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "kube-api-access-5lxbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.380832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.383002 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data" (OuterVolumeSpecName: "config-data") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.427870 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428833 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428851 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428866 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428877 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428889 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428901 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.448205 4675 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.530761 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.856837 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dbffd67c8-k8gzb"] Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857289 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857305 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd" Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857351 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857359 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857372 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857378 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log" Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857396 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54df341-915c-4505-bd2e-81923b07a2be" containerName="placement-db-sync" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857404 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f54df341-915c-4505-bd2e-81923b07a2be" containerName="placement-db-sync" Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857417 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerName="keystone-bootstrap" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857425 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerName="keystone-bootstrap" Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857436 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="init" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857443 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="init" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857612 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54df341-915c-4505-bd2e-81923b07a2be" containerName="placement-db-sync" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857630 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857665 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857683 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857694 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerName="keystone-bootstrap" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.858302 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.867779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.867979 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868095 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868532 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868638 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868760 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.885637 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dbffd67c8-k8gzb"] Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.922832 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c48f89996-b4jz4"] Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.926355 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.928101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerDied","Data":"9da5085f19a402a7076d2b62d3720d8bab0822f44dc0a613a31fb3c57b813329"} Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.928168 4675 scope.go:117] "RemoveContainer" containerID="6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.928349 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.931388 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tvkgt" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.933674 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.933901 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.945097 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.945377 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.953710 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-combined-ca-bundle\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " 
pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.953913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-fernet-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954205 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-config-data\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-scripts\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954630 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fj5\" (UniqueName: \"kubernetes.io/projected/405f0f26-61a4-4420-a147-43d7b86ebb8e-kube-api-access-29fj5\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954870 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-credential-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " 
pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.955082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-internal-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.955592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-public-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.993870 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb"} Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.032144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c48f89996-b4jz4"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.056887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf1f40fb-34b7-494b-bed1-b851a073ac8c-logs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.056969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-combined-ca-bundle\") pod 
\"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057025 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-scripts\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057085 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29fj5\" (UniqueName: \"kubernetes.io/projected/405f0f26-61a4-4420-a147-43d7b86ebb8e-kube-api-access-29fj5\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-scripts\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057144 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-credential-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057159 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-internal-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") 
" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057186 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-public-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057214 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-config-data\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057234 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-public-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-combined-ca-bundle\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057296 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-internal-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " 
pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-fernet-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057406 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-config-data\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb4w\" (UniqueName: \"kubernetes.io/projected/bf1f40fb-34b7-494b-bed1-b851a073ac8c-kube-api-access-dnb4w\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.063113 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerDied","Data":"7718fcd3b8f83feea63d09a395c7695283cd48482e3fdf877f211fe1be62a3b9"} Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.063238 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.090035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-public-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.091059 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-internal-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.091365 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-scripts\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.093337 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29fj5\" (UniqueName: \"kubernetes.io/projected/405f0f26-61a4-4420-a147-43d7b86ebb8e-kube-api-access-29fj5\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.101483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-credential-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 
07:12:49.105077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerStarted","Data":"5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8"} Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.115517 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-config-data\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.120702 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-fernet-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.121356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerStarted","Data":"3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58"} Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.127917 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.153270 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.157373 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-combined-ca-bundle\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " 
pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159301 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnb4w\" (UniqueName: \"kubernetes.io/projected/bf1f40fb-34b7-494b-bed1-b851a073ac8c-kube-api-access-dnb4w\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159490 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf1f40fb-34b7-494b-bed1-b851a073ac8c-logs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159621 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-combined-ca-bundle\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159785 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-scripts\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159938 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-config-data\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 
07:12:49.160027 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-public-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.160124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-internal-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.161166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf1f40fb-34b7-494b-bed1-b851a073ac8c-logs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.161303 4675 scope.go:117] "RemoveContainer" containerID="24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.164641 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-combined-ca-bundle\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.174100 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g8f6m" podStartSLOduration=2.5760479099999998 podStartE2EDuration="54.174070398s" podCreationTimestamp="2026-01-24 07:11:55 +0000 UTC" firstStartedPulling="2026-01-24 
07:11:56.631890198 +0000 UTC m=+1117.927995421" lastFinishedPulling="2026-01-24 07:12:48.229912676 +0000 UTC m=+1169.526017909" observedRunningTime="2026-01-24 07:12:49.142578198 +0000 UTC m=+1170.438683431" watchObservedRunningTime="2026-01-24 07:12:49.174070398 +0000 UTC m=+1170.470175621" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.174540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-config-data\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.185860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-public-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.193142 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-internal-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.198456 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-scripts\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.210042 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.215372 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnb4w\" (UniqueName: \"kubernetes.io/projected/bf1f40fb-34b7-494b-bed1-b851a073ac8c-kube-api-access-dnb4w\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.219926 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.234930 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.246363 4675 scope.go:117] "RemoveContainer" containerID="afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.261168 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.277637 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.279381 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.286959 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.287640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.292235 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.325598 4675 scope.go:117] "RemoveContainer" containerID="4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374402 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374447 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 
07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374473 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374538 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374561 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374596 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374618 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " 
pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477355 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 
07:12:49.477464 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477488 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477507 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.482097 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.506789 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.507382 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.509996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.514758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.528825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.584578 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.633826 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.634954 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.931162 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.120708 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c48f89996-b4jz4"] Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.267027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerStarted","Data":"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"} Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.352136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dbffd67c8-k8gzb"] Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.755662 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.983871 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" path="/var/lib/kubelet/pods/1e547740-a536-4d48-96a0-d22ca8bca63f/volumes" Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.984638 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" 
path="/var/lib/kubelet/pods/4c064344-984b-40fd-9a3b-503d8e1531fd/volumes" Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.295501 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c48f89996-b4jz4" event={"ID":"bf1f40fb-34b7-494b-bed1-b851a073ac8c","Type":"ContainerStarted","Data":"340a60ba91a5c112feda44c5853209a5d92c4929885af3b1133ffcc6a62e4d2c"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.295764 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c48f89996-b4jz4" event={"ID":"bf1f40fb-34b7-494b-bed1-b851a073ac8c","Type":"ContainerStarted","Data":"e7fa95b4125491815a23fb9aa6ecd0371264f88d9e12286196245a7450a7d2ec"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.301357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dbffd67c8-k8gzb" event={"ID":"405f0f26-61a4-4420-a147-43d7b86ebb8e","Type":"ContainerStarted","Data":"735478c8cb0c4c3ea93098a1dbe94f07614dae1ef0bd881871ef6f4f3ad0ebef"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.301409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dbffd67c8-k8gzb" event={"ID":"405f0f26-61a4-4420-a147-43d7b86ebb8e","Type":"ContainerStarted","Data":"1727be14daac0838f03226de3925fb536dac83daaff543cd783cd74c07aa0e25"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.301437 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.304795 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerStarted","Data":"e4fd29804bf1cbcfbb72dce66fcc9bef4e155c732e7dbfc8e6239baf96486755"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.325322 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dbffd67c8-k8gzb" 
podStartSLOduration=3.325306843 podStartE2EDuration="3.325306843s" podCreationTimestamp="2026-01-24 07:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:51.318549011 +0000 UTC m=+1172.614654234" watchObservedRunningTime="2026-01-24 07:12:51.325306843 +0000 UTC m=+1172.621412066" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.337595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerStarted","Data":"a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.361318 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c48f89996-b4jz4" event={"ID":"bf1f40fb-34b7-494b-bed1-b851a073ac8c","Type":"ContainerStarted","Data":"b8a8290d1b7a8bed3c58145348a0cb96a27c48d50a18f8eb1c5bb69798e76601"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.361607 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.371631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerStarted","Data":"ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.375921 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerStarted","Data":"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.387968 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c48f89996-b4jz4" 
podStartSLOduration=4.387952333 podStartE2EDuration="4.387952333s" podCreationTimestamp="2026-01-24 07:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:52.381216162 +0000 UTC m=+1173.677321385" watchObservedRunningTime="2026-01-24 07:12:52.387952333 +0000 UTC m=+1173.684057556" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.463333 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-58bxq" podStartSLOduration=5.501753306 podStartE2EDuration="58.463317941s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="2026-01-24 07:11:56.487429463 +0000 UTC m=+1117.783534686" lastFinishedPulling="2026-01-24 07:12:49.448994098 +0000 UTC m=+1170.745099321" observedRunningTime="2026-01-24 07:12:52.427631331 +0000 UTC m=+1173.723736554" watchObservedRunningTime="2026-01-24 07:12:52.463317941 +0000 UTC m=+1173.759423164" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.465499 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.465492824 podStartE2EDuration="9.465492824s" podCreationTimestamp="2026-01-24 07:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:52.456183389 +0000 UTC m=+1173.752288612" watchObservedRunningTime="2026-01-24 07:12:52.465492824 +0000 UTC m=+1173.761598047" Jan 24 07:12:53 crc kubenswrapper[4675]: I0124 07:12:53.397622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerStarted","Data":"92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e"} Jan 24 07:12:53 crc kubenswrapper[4675]: I0124 07:12:53.398569 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:53 crc kubenswrapper[4675]: I0124 07:12:53.421946 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.421922262 podStartE2EDuration="4.421922262s" podCreationTimestamp="2026-01-24 07:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:53.419631917 +0000 UTC m=+1174.715737140" watchObservedRunningTime="2026-01-24 07:12:53.421922262 +0000 UTC m=+1174.718027485" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.151202 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.188116 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.188177 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.251842 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.283515 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.313895 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" 
probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.414005 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.414899 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 07:12:55 crc kubenswrapper[4675]: I0124 07:12:55.422268 4675 generic.go:334] "Generic (PLEG): container finished" podID="57270c73-9e5a-4629-8c7a-85123438a067" containerID="5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8" exitCode=0 Jan 24 07:12:55 crc kubenswrapper[4675]: I0124 07:12:55.422458 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerDied","Data":"5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8"} Jan 24 07:12:57 crc kubenswrapper[4675]: I0124 07:12:57.975264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 07:12:58 crc kubenswrapper[4675]: I0124 07:12:58.447509 4675 generic.go:334] "Generic (PLEG): container finished" podID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerID="ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c" exitCode=0 Jan 24 07:12:58 crc kubenswrapper[4675]: I0124 07:12:58.447549 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerDied","Data":"ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c"} Jan 24 07:12:59 crc kubenswrapper[4675]: I0124 07:12:59.881052 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Jan 24 07:12:59 crc kubenswrapper[4675]: I0124 07:12:59.948257 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:59 crc kubenswrapper[4675]: I0124 07:12:59.951109 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.017182 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.028257 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.470689 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.471157 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.009573 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.049878 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.114917 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"57270c73-9e5a-4629-8c7a-85123438a067\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.114986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"57270c73-9e5a-4629-8c7a-85123438a067\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.115043 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"57270c73-9e5a-4629-8c7a-85123438a067\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.131115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz" (OuterVolumeSpecName: "kube-api-access-txvlz") pod "57270c73-9e5a-4629-8c7a-85123438a067" (UID: "57270c73-9e5a-4629-8c7a-85123438a067"). InnerVolumeSpecName "kube-api-access-txvlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.138880 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "57270c73-9e5a-4629-8c7a-85123438a067" (UID: "57270c73-9e5a-4629-8c7a-85123438a067"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.187262 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57270c73-9e5a-4629-8c7a-85123438a067" (UID: "57270c73-9e5a-4629-8c7a-85123438a067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216065 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216165 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216205 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216223 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc 
kubenswrapper[4675]: I0124 07:13:02.216259 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216297 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216640 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216893 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216909 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216919 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216945 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.220911 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25" (OuterVolumeSpecName: "kube-api-access-psx25") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "kube-api-access-psx25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.223899 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.243877 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts" (OuterVolumeSpecName: "scripts") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.260836 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.292939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data" (OuterVolumeSpecName: "config-data") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318230 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318269 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318279 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318288 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318297 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.501888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerDied","Data":"5b259eb76af8e66f76ee1bcfd7ccd3f155f31927bbacf08cb7666192371fbd27"} Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.501942 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b259eb76af8e66f76ee1bcfd7ccd3f155f31927bbacf08cb7666192371fbd27" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.501907 4675 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.504166 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.504189 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.505298 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.505298 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerDied","Data":"c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc"} Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.505416 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.871376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.286666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.393531 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:03 crc kubenswrapper[4675]: E0124 07:13:03.393946 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57270c73-9e5a-4629-8c7a-85123438a067" containerName="barbican-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.393972 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="57270c73-9e5a-4629-8c7a-85123438a067" 
containerName="barbican-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: E0124 07:13:03.393982 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerName="cinder-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.393987 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerName="cinder-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.394153 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerName="cinder-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.394178 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="57270c73-9e5a-4629-8c7a-85123438a067" containerName="barbican-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.395063 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407405 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407433 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gdfs9" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407627 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407760 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.439514 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.498839 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9646bdbd7-ww6xm"] Jan 
24 07:13:03 crc kubenswrapper[4675]: E0124 07:13:03.517798 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.520186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.524762 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.525029 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.525226 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9pmfh" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.531988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8"} Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566650 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566697 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566738 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566821 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.603477 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9646bdbd7-ww6xm"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.651121 4675 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-67c5df6588-xqvmq"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.664099 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672796 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672857 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672877 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672899 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672977 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.673002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.681968 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.682412 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.692086 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.693282 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.701453 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.721453 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6h8\" (UniqueName: \"kubernetes.io/projected/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-kube-api-access-xd6h8\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-combined-ca-bundle\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783886 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data-custom\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783905 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783976 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-logs\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.825110 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67c5df6588-xqvmq"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.890968 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-combined-ca-bundle\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.899947 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6h8\" (UniqueName: \"kubernetes.io/projected/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-kube-api-access-xd6h8\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900202 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-combined-ca-bundle\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: 
\"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900416 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data-custom\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data-custom\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900691 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-logs\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.901105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c5d104c-9f26-49fd-bec5-f62a53503d42-logs\") pod 
\"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.901245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbwp\" (UniqueName: \"kubernetes.io/projected/3c5d104c-9f26-49fd-bec5-f62a53503d42-kube-api-access-6tbwp\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.901398 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.907102 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-logs\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.915435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data-custom\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.919166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-combined-ca-bundle\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.942880 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.955863 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.970966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.983437 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6h8\" (UniqueName: \"kubernetes.io/projected/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-kube-api-access-xd6h8\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003524 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-combined-ca-bundle\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003596 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data-custom\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003692 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c5d104c-9f26-49fd-bec5-f62a53503d42-logs\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003735 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbwp\" (UniqueName: \"kubernetes.io/projected/3c5d104c-9f26-49fd-bec5-f62a53503d42-kube-api-access-6tbwp\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003758 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.008486 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c5d104c-9f26-49fd-bec5-f62a53503d42-logs\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.016323 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-combined-ca-bundle\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.023097 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.062847 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data-custom\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.063571 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.078481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbwp\" (UniqueName: \"kubernetes.io/projected/3c5d104c-9f26-49fd-bec5-f62a53503d42-kube-api-access-6tbwp\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.125662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"cinder-scheduler-0\" 
(UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.147779 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.167225 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213250 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213295 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213326 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213487 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.252965 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:04 crc kubenswrapper[4675]: E0124 07:13:04.253571 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-dhd8m ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-774db89647-4t4lw" podUID="3871ad3e-a2e3-488f-8b8b-9db66e3af5de" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.317241 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318753 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318814 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318864 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318883 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318910 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: 
\"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318930 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.330390 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.335121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.338561 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.340919 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.344618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.345050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.345358 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.363179 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.363658 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.386217 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.458897 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.474959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526740 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526875 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526898 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.527007 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.547575 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent" containerID="cri-o://0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48" gracePeriod=30 Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.554928 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.555636 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.556027 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" containerID="cri-o://8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8" gracePeriod=30 Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.556735 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" containerID="cri-o://9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb" gracePeriod=30 Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.574774 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.576311 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.587162 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.592632 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.632164 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.633556 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.643959 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644085 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644110 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.647790 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.652037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.652284 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.657935 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.680440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") 
pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.681991 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.687853 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.690314 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.690808 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.711507 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.744371 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749368 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749476 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749503 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749536 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 
crc kubenswrapper[4675]: I0124 07:13:04.749596 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749842 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749882 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749916 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749953 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750026 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750113 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750140 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" 
Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750159 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750197 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.752382 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.753545 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.753992 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config" (OuterVolumeSpecName: "config") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.754479 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.760912 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.762647 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m" (OuterVolumeSpecName: "kube-api-access-dhd8m") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "kube-api-access-dhd8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851305 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " 
pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851461 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851507 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: 
I0124 07:13:04.851521 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851568 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851579 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851587 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851597 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851605 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851615 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: 
I0124 07:13:04.858450 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.858560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.859177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.859758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.861323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.862036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.865167 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.865706 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.874296 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.900933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.933274 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.975546 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.987555 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.002781 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9646bdbd7-ww6xm"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.169442 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.169941 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" containerID="cri-o://31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" gracePeriod=30 Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.170766 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" containerID="cri-o://e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" gracePeriod=30 Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.234726 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77c5f475df-4zndh"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.236129 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.254476 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c5f475df-4zndh"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365726 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-combined-ca-bundle\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365826 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-ovndb-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365846 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-httpd-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-public-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-internal-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fx9\" (UniqueName: \"kubernetes.io/projected/4dd8da22-c828-48e1-bbab-d7360beb8d9f-kube-api-access-b4fx9\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.453817 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468616 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fx9\" (UniqueName: \"kubernetes.io/projected/4dd8da22-c828-48e1-bbab-d7360beb8d9f-kube-api-access-b4fx9\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-combined-ca-bundle\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc 
kubenswrapper[4675]: I0124 07:13:05.468824 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-ovndb-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-httpd-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-public-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-internal-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.484812 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-ovndb-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.498535 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-httpd-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.501510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-internal-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.501693 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67c5df6588-xqvmq"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.502311 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-public-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.504649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.515471 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-combined-ca-bundle\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.534779 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fx9\" (UniqueName: \"kubernetes.io/projected/4dd8da22-c828-48e1-bbab-d7360beb8d9f-kube-api-access-b4fx9\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.582820 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.589424 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": read tcp 10.217.0.2:35164->10.217.0.154:9696: read: connection reset by peer" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.613945 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerStarted","Data":"d654f868dd8eaa55d5aced57050491d4bfc469df89f624ef9e17f9340f8f21b9"} Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.640390 4675 generic.go:334] "Generic (PLEG): container finished" podID="62b7e06f-b840-408c-b026-a086b975812f" containerID="9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb" exitCode=2 Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.641735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb"} Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.649542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.650224 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9646bdbd7-ww6xm" event={"ID":"be4ebeb1-6268-4363-948f-8f9aa8f61fe9","Type":"ContainerStarted","Data":"ff7a8cc9d923e6a2affe78a00ab26b24d1af2607333da6ad302c7009f39e265a"} Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.662758 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.664253 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" event={"ID":"3c5d104c-9f26-49fd-bec5-f62a53503d42","Type":"ContainerStarted","Data":"5b772e29cb07f523054debab49f022b04588f8f30e3edb348f80f72b74c89242"} Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.767279 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.795088 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.920828 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.950882 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:05 crc kubenswrapper[4675]: W0124 07:13:05.971078 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d870cc_b2a9_442f_9779_bf9fbeb8ce2b.slice/crio-40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a WatchSource:0}: Error finding container 40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a: Status 404 returned error can't find the container with id 40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a Jan 24 07:13:05 crc kubenswrapper[4675]: W0124 07:13:05.989120 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ac4ed7_04e2_420b_b6cd_4021c5cd1b9f.slice/crio-ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196 WatchSource:0}: Error finding container ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196: Status 404 returned error can't find the container with id ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196 Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.394266 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c5f475df-4zndh"] Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.725482 4675 generic.go:334] "Generic (PLEG): container finished" podID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerID="e5be4111b174b0893302ec72491db10290b0324936ef7df715e08fe66a0569cc" exitCode=0 Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.726375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerDied","Data":"e5be4111b174b0893302ec72491db10290b0324936ef7df715e08fe66a0569cc"} Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.726405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerStarted","Data":"40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a"} 
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.777440 4675 generic.go:334] "Generic (PLEG): container finished" podID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" exitCode=0 Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.777686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerDied","Data":"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"} Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.808963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerStarted","Data":"e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8"} Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.809004 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerStarted","Data":"ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196"} Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.881605 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerStarted","Data":"822c1c454fd0151d45480f18166086e5ee1d458af466ed3b7a6d7251dabea3aa"} Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.973781 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3871ad3e-a2e3-488f-8b8b-9db66e3af5de" path="/var/lib/kubelet/pods/3871ad3e-a2e3-488f-8b8b-9db66e3af5de/volumes" Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.052587 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" 
probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.488254 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.812152 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="009254f3-9d76-4d89-8e35-d2b4c4be0da8" containerName="galera" probeResult="failure" output="command timed out" Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.861458 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.995931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.995989 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996091 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996159 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996215 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996284 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.999493 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerStarted","Data":"2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb"} Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.999553 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.999575 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.014159 4675 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.044923 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj" (OuterVolumeSpecName: "kube-api-access-8z8wj") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "kube-api-access-8z8wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.061451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerStarted","Data":"247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de"} Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.063805 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dff87ccf4-s6k69" podStartSLOduration=4.063794022 podStartE2EDuration="4.063794022s" podCreationTimestamp="2026-01-24 07:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:08.058242148 +0000 UTC m=+1189.354347371" watchObservedRunningTime="2026-01-24 07:13:08.063794022 +0000 UTC m=+1189.359899255" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.100161 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc 
kubenswrapper[4675]: I0124 07:13:08.100198 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.112959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c5f475df-4zndh" event={"ID":"4dd8da22-c828-48e1-bbab-d7360beb8d9f","Type":"ContainerStarted","Data":"84fef0fac78f24b3e6b8786f384aaf6f2cd1af7ab01ee22f772c1f94b033e487"} Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.113002 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c5f475df-4zndh" event={"ID":"4dd8da22-c828-48e1-bbab-d7360beb8d9f","Type":"ContainerStarted","Data":"6587db20d5f195ec6ce7bb19011525e5299b19984aa98cc842769ccd592191ef"} Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.124131 4675 generic.go:334] "Generic (PLEG): container finished" podID="62b7e06f-b840-408c-b026-a086b975812f" containerID="0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48" exitCode=0 Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.125050 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48"} Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160483 4675 generic.go:334] "Generic (PLEG): container finished" podID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" exitCode=0 Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160525 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerDied","Data":"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"} Jan 24 07:13:08 crc 
kubenswrapper[4675]: I0124 07:13:08.160552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerDied","Data":"5ce213dfa2f439c1f1ec2ddd0ebc6a5f1f2676cc28fbcde221469753153be07d"} Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160567 4675 scope.go:117] "RemoveContainer" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160607 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.188602 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.201647 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.211394 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config" (OuterVolumeSpecName: "config") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.238362 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.272102 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.305870 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.305905 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.305917 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.342535 4675 scope.go:117] "RemoveContainer" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.348346 4675 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.407174 4675 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.417004 4675 scope.go:117] "RemoveContainer" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" Jan 24 07:13:08 crc kubenswrapper[4675]: E0124 07:13:08.421511 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429\": container with ID starting with e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429 not found: ID does not exist" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.421552 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"} err="failed to get container status \"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429\": rpc error: code = NotFound desc = could not find container \"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429\": container with ID starting with e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429 not found: ID does not exist" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.421575 4675 scope.go:117] "RemoveContainer" 
containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" Jan 24 07:13:08 crc kubenswrapper[4675]: E0124 07:13:08.424381 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c\": container with ID starting with 31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c not found: ID does not exist" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.424418 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"} err="failed to get container status \"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c\": rpc error: code = NotFound desc = could not find container \"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c\": container with ID starting with 31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c not found: ID does not exist" Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.566513 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.583598 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.973796 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" path="/var/lib/kubelet/pods/26894b11-10b1-4fa5-bd28-2fb4022c467b/volumes" Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.175897 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerStarted","Data":"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055"} Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.180117 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerStarted","Data":"80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5"} Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.194976 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c5f475df-4zndh" event={"ID":"4dd8da22-c828-48e1-bbab-d7360beb8d9f","Type":"ContainerStarted","Data":"29aeb18c407661cfcbc9b30181a62734f146aaebc5b1ba39c7ab5883861c6c5b"} Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.195218 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.205642 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" podStartSLOduration=5.205619832 podStartE2EDuration="5.205619832s" podCreationTimestamp="2026-01-24 07:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:09.200199571 +0000 UTC m=+1190.496304794" watchObservedRunningTime="2026-01-24 07:13:09.205619832 +0000 UTC m=+1190.501725055" Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.988786 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.216451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerStarted","Data":"551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb"} 
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.216613 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" containerID="cri-o://247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de" gracePeriod=30 Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.216934 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.217209 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" containerID="cri-o://551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb" gracePeriod=30 Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.242485 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.24246462 podStartE2EDuration="6.24246462s" podCreationTimestamp="2026-01-24 07:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:10.236771332 +0000 UTC m=+1191.532876575" watchObservedRunningTime="2026-01-24 07:13:10.24246462 +0000 UTC m=+1191.538569843" Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.243050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77c5f475df-4zndh" podStartSLOduration=5.243044104 podStartE2EDuration="5.243044104s" podCreationTimestamp="2026-01-24 07:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:09.229118819 +0000 UTC m=+1190.525224042" watchObservedRunningTime="2026-01-24 07:13:10.243044104 +0000 UTC m=+1191.539149327" Jan 24 07:13:11 crc 
kubenswrapper[4675]: I0124 07:13:11.248611 4675 generic.go:334] "Generic (PLEG): container finished" podID="c678b4d9-4b62-4225-b60b-06753bb72445" containerID="551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb" exitCode=0 Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.249344 4675 generic.go:334] "Generic (PLEG): container finished" podID="c678b4d9-4b62-4225-b60b-06753bb72445" containerID="247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de" exitCode=143 Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.249318 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerDied","Data":"551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb"} Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.249429 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerDied","Data":"247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de"} Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.302364 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.383647 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386709 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386821 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386855 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387041 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387190 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387217 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs" (OuterVolumeSpecName: "logs") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.388117 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.388182 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.394852 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh" (OuterVolumeSpecName: "kube-api-access-558lh") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "kube-api-access-558lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.400053 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.401876 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts" (OuterVolumeSpecName: "scripts") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.489839 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.490853 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.490922 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.532049 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.543760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data" (OuterVolumeSpecName: "config-data") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.592875 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.593132 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.682784 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79656b6bf8-nwng8"] Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683150 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683167 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683201 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683212 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683218 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683233 4675 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683240 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683394 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683416 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683432 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683441 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.684582 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.688635 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.688818 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.713551 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79656b6bf8-nwng8"] Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-combined-ca-bundle\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796262 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796295 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-internal-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796318 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-public-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldqkz\" (UniqueName: \"kubernetes.io/projected/17e03478-4656-43f8-8d7b-5dfb1ff160a1-kube-api-access-ldqkz\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data-custom\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796600 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e03478-4656-43f8-8d7b-5dfb1ff160a1-logs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e03478-4656-43f8-8d7b-5dfb1ff160a1-logs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898686 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-combined-ca-bundle\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898740 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-internal-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898803 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-public-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898896 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldqkz\" (UniqueName: \"kubernetes.io/projected/17e03478-4656-43f8-8d7b-5dfb1ff160a1-kube-api-access-ldqkz\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898930 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data-custom\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.900112 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e03478-4656-43f8-8d7b-5dfb1ff160a1-logs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.913300 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-combined-ca-bundle\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.922216 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-public-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.923087 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-internal-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.926807 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldqkz\" (UniqueName: 
\"kubernetes.io/projected/17e03478-4656-43f8-8d7b-5dfb1ff160a1-kube-api-access-ldqkz\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.927056 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data-custom\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.929104 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.011271 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.273473 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9646bdbd7-ww6xm" event={"ID":"be4ebeb1-6268-4363-948f-8f9aa8f61fe9","Type":"ContainerStarted","Data":"172fca2ea66c7a098d930f88a6c77967b30ec77e35d1c627a2ac60c7517c2d53"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.273848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9646bdbd7-ww6xm" event={"ID":"be4ebeb1-6268-4363-948f-8f9aa8f61fe9","Type":"ContainerStarted","Data":"c0520aff7546d042d6808c308677af9396e47ba5ae36c423c45c723550f7c668"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.293862 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" event={"ID":"3c5d104c-9f26-49fd-bec5-f62a53503d42","Type":"ContainerStarted","Data":"11922f6491c30ae72c9053675e373018b68132bf345912487af17dd08467bb6d"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.293917 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" event={"ID":"3c5d104c-9f26-49fd-bec5-f62a53503d42","Type":"ContainerStarted","Data":"ef45fd3ca461dda1926595bea60474d27f5b84f19ee20d00c31af63403af0442"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.319849 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9646bdbd7-ww6xm" podStartSLOduration=3.481292273 podStartE2EDuration="9.319830904s" podCreationTimestamp="2026-01-24 07:13:03 +0000 UTC" firstStartedPulling="2026-01-24 07:13:05.092473216 +0000 UTC m=+1186.388578439" lastFinishedPulling="2026-01-24 07:13:10.931011847 +0000 UTC m=+1192.227117070" observedRunningTime="2026-01-24 07:13:12.299044523 +0000 UTC m=+1193.595149746" watchObservedRunningTime="2026-01-24 07:13:12.319830904 +0000 UTC m=+1193.615936127"
Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.328697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerDied","Data":"822c1c454fd0151d45480f18166086e5ee1d458af466ed3b7a6d7251dabea3aa"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.328748 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.328959 4675 scope.go:117] "RemoveContainer" containerID="551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.347435 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerStarted","Data":"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.350681 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" podStartSLOduration=3.879686423 podStartE2EDuration="9.350663758s" podCreationTimestamp="2026-01-24 07:13:03 +0000 UTC" firstStartedPulling="2026-01-24 07:13:05.464291455 +0000 UTC m=+1186.760396678" lastFinishedPulling="2026-01-24 07:13:10.93526879 +0000 UTC m=+1192.231374013" observedRunningTime="2026-01-24 07:13:12.327066968 +0000 UTC m=+1193.623172191" watchObservedRunningTime="2026-01-24 07:13:12.350663758 +0000 UTC m=+1193.646768981" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.404549 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.828277379 podStartE2EDuration="9.404532418s" podCreationTimestamp="2026-01-24 07:13:03 +0000 UTC" firstStartedPulling="2026-01-24 07:13:05.431598275 +0000 UTC m=+1186.727703488" lastFinishedPulling="2026-01-24
07:13:07.007853304 +0000 UTC m=+1188.303958527" observedRunningTime="2026-01-24 07:13:12.379105803 +0000 UTC m=+1193.675211026" watchObservedRunningTime="2026-01-24 07:13:12.404532418 +0000 UTC m=+1193.700637641" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.449986 4675 scope.go:117] "RemoveContainer" containerID="247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.461836 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.485263 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.499432 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.501079 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.512585 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.513332 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.513578 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.513785 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.618709 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79656b6bf8-nwng8"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f870976e-13a5-4226-9eff-18a3244582e8-logs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622789 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhm9\" (UniqueName: \"kubernetes.io/projected/f870976e-13a5-4226-9eff-18a3244582e8-kube-api-access-hdhm9\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-public-tls-certs\") pod 
\"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.623002 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-scripts\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.623053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f870976e-13a5-4226-9eff-18a3244582e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.623111 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734756 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734865 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-scripts\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734896 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f870976e-13a5-4226-9eff-18a3244582e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734924 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734963 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f870976e-13a5-4226-9eff-18a3244582e8-logs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734998 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.735024 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdhm9\" (UniqueName: \"kubernetes.io/projected/f870976e-13a5-4226-9eff-18a3244582e8-kube-api-access-hdhm9\") pod \"cinder-api-0\" (UID: 
\"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.735393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.736114 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f870976e-13a5-4226-9eff-18a3244582e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.736364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f870976e-13a5-4226-9eff-18a3244582e8-logs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.745391 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.745820 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-scripts\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.746016 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.746303 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.749662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.750239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.764962 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdhm9\" (UniqueName: \"kubernetes.io/projected/f870976e-13a5-4226-9eff-18a3244582e8-kube-api-access-hdhm9\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.829785 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.957149 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" path="/var/lib/kubelet/pods/c678b4d9-4b62-4225-b60b-06753bb72445/volumes" Jan 24 07:13:13 crc kubenswrapper[4675]: I0124 07:13:13.364735 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:13 crc kubenswrapper[4675]: I0124 07:13:13.374757 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79656b6bf8-nwng8" event={"ID":"17e03478-4656-43f8-8d7b-5dfb1ff160a1","Type":"ContainerStarted","Data":"565fdc104dc9f25ad7bebfbde21cc227956f2152df8f23f7faa79ca875aa4f6e"} Jan 24 07:13:13 crc kubenswrapper[4675]: I0124 07:13:13.374805 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79656b6bf8-nwng8" event={"ID":"17e03478-4656-43f8-8d7b-5dfb1ff160a1","Type":"ContainerStarted","Data":"05fd68b1a31a310dbf5958733a66abbf6b9d393f1b1d6331ab684062134aa61a"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.312227 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.312944 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.313793 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1"} pod="openstack/horizon-656ff794dd-jx8ld" containerMessage="Container horizon failed 
startup probe, will be restarted" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.313907 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" containerID="cri-o://7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1" gracePeriod=30 Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.364431 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.365796 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": dial tcp 10.217.0.159:8080: connect: connection refused" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.414982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f870976e-13a5-4226-9eff-18a3244582e8","Type":"ContainerStarted","Data":"0875375a259484ea75e6fab6e15e9ca59a92c2605e4a9463ffd7e91a50ab446d"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.415033 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f870976e-13a5-4226-9eff-18a3244582e8","Type":"ContainerStarted","Data":"1a180ff61c05b88618b110f5bf4206d30d93233c1d6b71ba38239792edb30cc8"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.428337 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79656b6bf8-nwng8" event={"ID":"17e03478-4656-43f8-8d7b-5dfb1ff160a1","Type":"ContainerStarted","Data":"a900a5a55271a45364da2d4a0eaa988541fd7e143f83b24101bd2b062bb452c6"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.428653 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.428673 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.493088 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79656b6bf8-nwng8" podStartSLOduration=3.493066531 podStartE2EDuration="3.493066531s" podCreationTimestamp="2026-01-24 07:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:14.479080003 +0000 UTC m=+1195.775185216" watchObservedRunningTime="2026-01-24 07:13:14.493066531 +0000 UTC m=+1195.789171754" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.990922 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.059007 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.059280 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns" containerID="cri-o://13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42" gracePeriod=10 Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.481136 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f870976e-13a5-4226-9eff-18a3244582e8","Type":"ContainerStarted","Data":"bec4e19d24c4e8e817d0b266ad5c4080a43f29c5af6cd880e38f03bad869bcaa"} Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.482712 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 07:13:15 crc 
kubenswrapper[4675]: I0124 07:13:15.493697 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerID="13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42" exitCode=0 Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.494638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerDied","Data":"13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42"} Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.514102 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.514083696 podStartE2EDuration="3.514083696s" podCreationTimestamp="2026-01-24 07:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:15.504604928 +0000 UTC m=+1196.800710151" watchObservedRunningTime="2026-01-24 07:13:15.514083696 +0000 UTC m=+1196.810188919" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.669863 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.819603 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.819925 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.819993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.820066 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.820141 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.820189 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.834974 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l" (OuterVolumeSpecName: "kube-api-access-xjw4l") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "kube-api-access-xjw4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.909322 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.909755 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config" (OuterVolumeSpecName: "config") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.919124 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922570 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922595 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922604 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922614 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.925702 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.930653 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.024587 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.024644 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.505692 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerDied","Data":"b1d2359f9bd1730fd38d06c61fbb2923f790bf4bfe4ea9760488e068602ba6b1"} Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.505781 4675 scope.go:117] "RemoveContainer" containerID="13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.505748 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.545862 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.558020 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.614799 4675 scope.go:117] "RemoveContainer" containerID="a357ef50188a1acd7da313f1d5fc0be108c9ca15168c4882b220cb0612f377a4" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.953411 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" path="/var/lib/kubelet/pods/ce054cb0-d2ad-4960-9078-d977ce3ca9e6/volumes" Jan 24 07:13:17 crc kubenswrapper[4675]: I0124 07:13:17.638621 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:17 crc kubenswrapper[4675]: I0124 07:13:17.764607 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:18 crc kubenswrapper[4675]: I0124 07:13:18.188838 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:13:19 crc kubenswrapper[4675]: I0124 07:13:19.043121 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:19 crc kubenswrapper[4675]: I0124 07:13:19.743384 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 07:13:19 crc kubenswrapper[4675]: I0124 07:13:19.796999 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:20 crc kubenswrapper[4675]: I0124 07:13:20.548362 4675 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/cinder-scheduler-0" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler" containerID="cri-o://c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" gracePeriod=30 Jan 24 07:13:20 crc kubenswrapper[4675]: I0124 07:13:20.548522 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe" containerID="cri-o://a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" gracePeriod=30 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.000446 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.100808 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.101088 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" containerID="cri-o://e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8" gracePeriod=30 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.101904 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" containerID="cri-o://2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb" gracePeriod=30 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.107823 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": EOF" Jan 24 07:13:21 crc 
kubenswrapper[4675]: I0124 07:13:21.570991 4675 generic.go:334] "Generic (PLEG): container finished" podID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerID="e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8" exitCode=143 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.571039 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerDied","Data":"e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8"} Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.626381 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.820846 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.874404 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:13:22 crc kubenswrapper[4675]: I0124 07:13:22.456475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:13:22 crc kubenswrapper[4675]: I0124 07:13:22.580242 4675 generic.go:334] "Generic (PLEG): container finished" podID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" exitCode=0 Jan 24 07:13:22 crc kubenswrapper[4675]: I0124 07:13:22.580317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerDied","Data":"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6"} Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.493679 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579701 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579748 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579810 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579848 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.580356 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.590965 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts" (OuterVolumeSpecName: "scripts") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.592908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.593478 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5" (OuterVolumeSpecName: "kube-api-access-mt4p5") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "kube-api-access-mt4p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628620 4675 generic.go:334] "Generic (PLEG): container finished" podID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" exitCode=0 Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerDied","Data":"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055"} Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628699 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerDied","Data":"d654f868dd8eaa55d5aced57050491d4bfc469df89f624ef9e17f9340f8f21b9"} Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628735 4675 scope.go:117] "RemoveContainer" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628901 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.673988 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700309 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700338 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700348 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700358 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700369 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.721305 4675 scope.go:117] "RemoveContainer" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.766817 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data" (OuterVolumeSpecName: "config-data") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.773881 4675 scope.go:117] "RemoveContainer" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" Jan 24 07:13:23 crc kubenswrapper[4675]: E0124 07:13:23.774614 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6\": container with ID starting with a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6 not found: ID does not exist" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.774693 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6"} err="failed to get container status \"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6\": rpc error: code = NotFound desc = could not find container \"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6\": container with ID starting with a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6 not found: ID does not exist" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.774767 4675 scope.go:117] "RemoveContainer" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" Jan 24 07:13:23 crc kubenswrapper[4675]: E0124 07:13:23.775091 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055\": container with ID starting with c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055 not found: ID does not exist" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.775139 
4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055"} err="failed to get container status \"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055\": rpc error: code = NotFound desc = could not find container \"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055\": container with ID starting with c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055 not found: ID does not exist"
Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.802044 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.965925 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.978383 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002040 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002437 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002453 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002463 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002470 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002494 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002501 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002517 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="init"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002523 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="init"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002832 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002849 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002860 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.005886 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.007797 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.027482 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.105942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.105993 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-kube-api-access-rn2st\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106142 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106231 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208052 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208167 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208220 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208325 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208349 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-kube-api-access-rn2st\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208838 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.219961 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.220077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.221426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.242377 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-kube-api-access-rn2st\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.242625 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.354519 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.378092 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.381576 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.396224 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fcqz5"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.398506 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.402009 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.486831 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522798 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522871 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625695 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625786 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625814 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.635547 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.642584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.645476 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.671519 4675 projected.go:194] Error preparing data for projected volume kube-api-access-j6zbn for pod openstack/openstackclient: failed to fetch token: pod "openstackclient" not found
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.671578 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn podName:64489b46-b7cd-4c35-976d-c8397add424a nodeName:}" failed. No retries permitted until 2026-01-24 07:13:25.171560677 +0000 UTC m=+1206.467665890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j6zbn" (UniqueName: "kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn") pod "openstackclient" (UID: "64489b46-b7cd-4c35-976d-c8397add424a") : failed to fetch token: pod "openstackclient" not found
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.671849 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.678337 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-j6zbn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="64489b46-b7cd-4c35-976d-c8397add424a"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.692412 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.716955 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.720781 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.723582 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpk7\" (UniqueName: \"kubernetes.io/projected/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-kube-api-access-5lpk7\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839770 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839830 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config-secret\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941279 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config-secret\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpk7\" (UniqueName: \"kubernetes.io/projected/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-kube-api-access-5lpk7\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941510 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.942432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.956463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.957424 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config-secret\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.964004 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" path="/var/lib/kubelet/pods/46bd6c12-adaa-4fef-9ed2-4111468e21a4/volumes"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.969116 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpk7\" (UniqueName: \"kubernetes.io/projected/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-kube-api-access-5lpk7\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.037475 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.046133 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.255001 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: E0124 07:13:25.259154 4675 projected.go:194] Error preparing data for projected volume kube-api-access-j6zbn for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (64489b46-b7cd-4c35-976d-c8397add424a) does not match the UID in record. The object might have been deleted and then recreated
Jan 24 07:13:25 crc kubenswrapper[4675]: E0124 07:13:25.259366 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn podName:64489b46-b7cd-4c35-976d-c8397add424a nodeName:}" failed. No retries permitted until 2026-01-24 07:13:26.259311433 +0000 UTC m=+1207.555416656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j6zbn" (UniqueName: "kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn") pod "openstackclient" (UID: "64489b46-b7cd-4c35-976d-c8397add424a") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (64489b46-b7cd-4c35-976d-c8397add424a) does not match the UID in record. The object might have been deleted and then recreated
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.535373 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.594573 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:44402->10.217.0.164:9311: read: connection reset by peer"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.594588 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:44390->10.217.0.164:9311: read: connection reset by peer"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.632498 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.717890 4675 generic.go:334] "Generic (PLEG): container finished" podID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerID="2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb" exitCode=0
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.718218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerDied","Data":"2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb"}
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.719645 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb","Type":"ContainerStarted","Data":"401ec6dfa8313b57ca54d42eb230a3fde3c398209ea32fd8f590b1acc9c09400"}
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.724579 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.724798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31cacad0-4d32-4300-8bdc-bbf15fcd77ac","Type":"ContainerStarted","Data":"493266e067a9bfb24e67415a7a14e73736f5b13343aedd0dc2c00fe60dbecaa9"}
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.756866 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.763949 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="64489b46-b7cd-4c35-976d-c8397add424a" podUID="2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.878157 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"64489b46-b7cd-4c35-976d-c8397add424a\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") "
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.878294 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"64489b46-b7cd-4c35-976d-c8397add424a\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") "
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.878567 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"64489b46-b7cd-4c35-976d-c8397add424a\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") "
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.879266 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.879804 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "64489b46-b7cd-4c35-976d-c8397add424a" (UID: "64489b46-b7cd-4c35-976d-c8397add424a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.891620 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64489b46-b7cd-4c35-976d-c8397add424a" (UID: "64489b46-b7cd-4c35-976d-c8397add424a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.906826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "64489b46-b7cd-4c35-976d-c8397add424a" (UID: "64489b46-b7cd-4c35-976d-c8397add424a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.981019 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.981044 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.981054 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.166687 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286002 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286046 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286101 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286173 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286237 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286890 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs" (OuterVolumeSpecName: "logs") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.290905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.292054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd" (OuterVolumeSpecName: "kube-api-access-g9mvd") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "kube-api-access-g9mvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.321609 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.356814 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data" (OuterVolumeSpecName: "config-data") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388344 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388380 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388392 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388401 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388410 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.754105 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31cacad0-4d32-4300-8bdc-bbf15fcd77ac","Type":"ContainerStarted","Data":"5575dec237a9f5d9480ea08e2ecf8aeed6f01b17856974322269973a088ec027"}
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760703 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerDied","Data":"ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196"}
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760763 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760776 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760772 4675 scope.go:117] "RemoveContainer" containerID="2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.780034 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="64489b46-b7cd-4c35-976d-c8397add424a" podUID="2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.791179 4675 scope.go:117] "RemoveContainer" containerID="e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.800589 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"]
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.809984 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"]
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.952431 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64489b46-b7cd-4c35-976d-c8397add424a" path="/var/lib/kubelet/pods/64489b46-b7cd-4c35-976d-c8397add424a/volumes"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.952857 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" path="/var/lib/kubelet/pods/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f/volumes"
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.015880 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79656b6bf8-nwng8" podUID="17e03478-4656-43f8-8d7b-5dfb1ff160a1" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.540213 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.855780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31cacad0-4d32-4300-8bdc-bbf15fcd77ac","Type":"ContainerStarted","Data":"8f81671f1a461ee9b613e2b6c9f1be3c3719200523723bddb7e92d631dfe28f8"}
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.878788 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.878770521 podStartE2EDuration="4.878770521s" podCreationTimestamp="2026-01-24 07:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:27.876536768 +0000 UTC m=+1209.172642001" watchObservedRunningTime="2026-01-24 07:13:27.878770521 +0000 UTC m=+1209.174875744"
Jan 24 07:13:29 crc kubenswrapper[4675]: I0124 07:13:29.355192 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.646971 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5875964765-b68mp"]
Jan 24 07:13:31 crc kubenswrapper[4675]: E0124 07:13:31.648295 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api"
Jan 24 07:13:31 crc kubenswrapper[4675]: I0124
07:13:31.648310 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" Jan 24 07:13:31 crc kubenswrapper[4675]: E0124 07:13:31.648366 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.648373 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.650744 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.650773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.656983 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.667433 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.667762 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.668489 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-log-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708294 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j4l\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-kube-api-access-t9j4l\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708437 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-public-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-run-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-config-data\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.709865 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5875964765-b68mp"] Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.713175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-internal-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.713339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-combined-ca-bundle\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.713508 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-etc-swift\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 
24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-etc-swift\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815776 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-log-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815818 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9j4l\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-kube-api-access-t9j4l\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815877 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-public-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815911 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-run-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815951 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-config-data\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-internal-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.816004 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-combined-ca-bundle\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.816364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-log-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.816616 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-run-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.824010 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-internal-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.824159 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-combined-ca-bundle\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.827103 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-config-data\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.829501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-public-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.831404 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-etc-swift\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.833139 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9j4l\" (UniqueName: 
\"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-kube-api-access-t9j4l\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.996870 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:34 crc kubenswrapper[4675]: I0124 07:13:34.549112 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 07:13:34 crc kubenswrapper[4675]: I0124 07:13:34.925673 4675 generic.go:334] "Generic (PLEG): container finished" podID="62b7e06f-b840-408c-b026-a086b975812f" containerID="8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8" exitCode=137 Jan 24 07:13:34 crc kubenswrapper[4675]: I0124 07:13:34.925839 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8"} Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.605290 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.680682 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.681223 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cddfd9dd-rbhzj" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api" containerID="cri-o://5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0" gracePeriod=30 Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.681296 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-67cddfd9dd-rbhzj" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd" containerID="cri-o://7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8" gracePeriod=30 Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.960184 4675 generic.go:334] "Generic (PLEG): container finished" podID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerID="7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8" exitCode=0 Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.960838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerDied","Data":"7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8"} Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.357990 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: 
\"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462515 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462557 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462618 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462650 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.464476 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.464512 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.469621 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts" (OuterVolumeSpecName: "scripts") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.470643 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn" (OuterVolumeSpecName: "kube-api-access-pb8hn") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "kube-api-access-pb8hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.503087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.546291 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data" (OuterVolumeSpecName: "config-data") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564207 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564238 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564247 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564256 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564264 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564271 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb8hn\" (UniqueName: 
\"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.577835 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.630241 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5875964765-b68mp"] Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.665518 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.024029 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"0ad4338e6f939f6bda642b2d5397708669ef3b6004444834c598ae8f3b747800"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.024088 4675 scope.go:117] "RemoveContainer" containerID="8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.024216 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.037070 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb","Type":"ContainerStarted","Data":"45c28282008767484a5f0789dd7a4805559890b72e9c3b147dc8c5ceaff39827"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.050643 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5875964765-b68mp" event={"ID":"fa1443f8-8586-4757-9637-378c7c88787d","Type":"ContainerStarted","Data":"df36ca04e0b2e233bc6b4351aa2079ede41ed02087949b0bc03afaf1ce76fa26"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.050687 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5875964765-b68mp" event={"ID":"fa1443f8-8586-4757-9637-378c7c88787d","Type":"ContainerStarted","Data":"4d45c7ff32645e7c68a19063b5f235827135ecd5d3374c900e986f5e650ea018"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.071829 4675 scope.go:117] "RemoveContainer" containerID="9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.075972 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.081291 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.113263 4675 scope.go:117] "RemoveContainer" containerID="0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.138488 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.923873339 podStartE2EDuration="15.138465415s" podCreationTimestamp="2026-01-24 07:13:24 +0000 UTC" firstStartedPulling="2026-01-24 07:13:25.686069885 +0000 UTC 
m=+1206.982175108" lastFinishedPulling="2026-01-24 07:13:37.900661961 +0000 UTC m=+1219.196767184" observedRunningTime="2026-01-24 07:13:39.095858058 +0000 UTC m=+1220.391963281" watchObservedRunningTime="2026-01-24 07:13:39.138465415 +0000 UTC m=+1220.434570638" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.203565 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:39 crc kubenswrapper[4675]: E0124 07:13:39.204585 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.204680 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent" Jan 24 07:13:39 crc kubenswrapper[4675]: E0124 07:13:39.204878 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.204975 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" Jan 24 07:13:39 crc kubenswrapper[4675]: E0124 07:13:39.205063 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205116 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205351 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205416 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" 
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.208370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.214514 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.215461 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.218579 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.349235 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350487 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350750 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350873 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.452050 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.452842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.452985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453240 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453808 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.454336 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.464905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.465960 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.467080 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.476954 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.480834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.548357 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.791005 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.059711 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5875964765-b68mp" event={"ID":"fa1443f8-8586-4757-9637-378c7c88787d","Type":"ContainerStarted","Data":"e27b28b41b3b1c983e1b3eb7f93898fa6d9cea3754d1d74ecbdf0f6ae277a284"}
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.060767 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5875964765-b68mp"
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.060793 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5875964765-b68mp"
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.084030 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.111964 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5875964765-b68mp" podStartSLOduration=9.111946885 podStartE2EDuration="9.111946885s" podCreationTimestamp="2026-01-24 07:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:40.101763329 +0000 UTC m=+1221.397868562" watchObservedRunningTime="2026-01-24 07:13:40.111946885 +0000 UTC m=+1221.408052108"
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.957932 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b7e06f-b840-408c-b026-a086b975812f" path="/var/lib/kubelet/pods/62b7e06f-b840-408c-b026-a086b975812f/volumes"
Jan 24 07:13:41 crc kubenswrapper[4675]: I0124 07:13:41.073630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"0c61230a10f189c874d0a49db9b6e2672e6d430929f425e547950a4f18245bc4"}
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.086916 4675 generic.go:334] "Generic (PLEG): container finished" podID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerID="5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0" exitCode=0
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.087163 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerDied","Data":"5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0"}
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.094548 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"}
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.493023 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj"
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616374 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616570 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616611 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616661 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.643766 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f" (OuterVolumeSpecName: "kube-api-access-hw44f") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "kube-api-access-hw44f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.643913 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.720256 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.720305 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.745244 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.745334 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.758953 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config" (OuterVolumeSpecName: "config") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.821413 4675 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.821443 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.821455 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.104189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerDied","Data":"9a865440f381a7417cf12468f043671da2b8b23ef036f738147c217bd9897103"}
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.104755 4675 scope.go:117] "RemoveContainer" containerID="7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.104476 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.107561 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"}
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.166625 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.171208 4675 scope.go:117] "RemoveContainer" containerID="5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.182277 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.218830 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:13:43 crc kubenswrapper[4675]: E0124 07:13:43.219193 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219211 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd"
Jan 24 07:13:43 crc kubenswrapper[4675]: E0124 07:13:43.219236 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219243 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219387 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219414 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.220012 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.242295 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.325789 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.327887 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.335174 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.335279 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.352165 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.417855 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.418975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.425071 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436420 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436489 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436596 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.437505 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.452749 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.470311 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.530367 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.536694 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.536761 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.538900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.538968 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.539026 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.539075 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.539917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.572763 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.599741 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.642883 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.642976 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.643000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.643030 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.644140 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.658146 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.668378 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.690034 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.692526 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.702132 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.727775 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.744529 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.744568 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.747402 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.777471 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.818522 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.819679 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.823706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.843862 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.850179 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.850279 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.952975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956535 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956650 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.957588 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.964570 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.990582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.038951 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.063925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.064016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.069464 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.092801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.144417 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.165457 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"}
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.198565 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.309226 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.655330 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.664626 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f"
containerName="glance-httpd" containerID="cri-o://82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b" gracePeriod=30 Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.663425 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" containerID="cri-o://5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19" gracePeriod=30 Jan 24 07:13:44 crc kubenswrapper[4675]: E0124 07:13:44.697608 4675 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ac4ed7_04e2_420b_b6cd_4021c5cd1b9f.slice/crio-ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196: Error finding container ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196: Status 404 returned error can't find the container with id ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196 Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.034359 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" path="/var/lib/kubelet/pods/d7b4aa87-c092-4624-bd65-c9393dd36098/volumes" Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.035236 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"] Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.058500 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"] Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.098241 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"] Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.182934 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" 
event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerStarted","Data":"ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.182979 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerStarted","Data":"15c2743490cbd27546d69a8d6fb2ef4e7870e6c547960bdf525801f45200d4c2"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.193026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4z8kz" event={"ID":"9b6ffe68-4ebd-47e8-8b11-20050394e5b7","Type":"ContainerStarted","Data":"d941027a5ad5be885d89fdff0b43597296adafeb003615ecbc4f53f02190bd78"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.197909 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" event={"ID":"db48a3bd-546d-4f52-a9bc-340e03790730","Type":"ContainerStarted","Data":"d8fd9a81db61dcccc43a820e631e91369892f49d26e3b02c9f950de6edc3f9d7"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.201388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerStarted","Data":"dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.201439 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerStarted","Data":"bcc40f584741f9b08cd9119f4584722b1c051946412818cd64bd174bd95b9652"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.202887 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47cc-account-create-update-qbjjs" 
event={"ID":"f8458b8a-6770-4e62-9848-55a9b142cb8c","Type":"ContainerStarted","Data":"8cef497a71f5e8c9e6023c3573d0db3b33448ea3ea9a25de79ae6e97e277f95f"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.211426 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerID="7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1" exitCode=137 Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.211758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerDied","Data":"7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.215012 4675 generic.go:334] "Generic (PLEG): container finished" podID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19" exitCode=143 Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.215057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerDied","Data":"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.222952 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-p847h" podStartSLOduration=2.222926918 podStartE2EDuration="2.222926918s" podCreationTimestamp="2026-01-24 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:45.1998157 +0000 UTC m=+1226.495920923" watchObservedRunningTime="2026-01-24 07:13:45.222926918 +0000 UTC m=+1226.519032131" Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.267401 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-db-create-2gcsv" podStartSLOduration=2.26737279 podStartE2EDuration="2.26737279s" podCreationTimestamp="2026-01-24 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:45.244121309 +0000 UTC m=+1226.540226532" watchObservedRunningTime="2026-01-24 07:13:45.26737279 +0000 UTC m=+1226.563478013" Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.290116 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"] Jan 24 07:13:45 crc kubenswrapper[4675]: E0124 07:13:45.303260 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice/crio-conmon-8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b4aa87_c092_4624_bd65_c9393dd36098.slice/crio-conmon-7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b4aa87_c092_4624_bd65_c9393dd36098.slice/crio-7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice/crio-0ad4338e6f939f6bda642b2d5397708669ef3b6004444834c598ae8f3b747800\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b4aa87_c092_4624_bd65_c9393dd36098.slice/crio-5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice/crio-8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a50c14d_d518_492c_87d1_a194dc075c9f.slice/crio-conmon-5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ac4ed7_04e2_420b_b6cd_4021c5cd1b9f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64489b46_b7cd_4c35_976d_c8397add424a.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:13:45 crc kubenswrapper[4675]: W0124 07:13:45.337382 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb102798_6f2c_4cf4_b697_03cc94f9174a.slice/crio-32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4 WatchSource:0}: Error finding container 32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4: Status 404 returned error can't find the container with id 32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.224114 4675 generic.go:334] "Generic (PLEG): container finished" podID="db48a3bd-546d-4f52-a9bc-340e03790730" containerID="7ee7b6faa999fda3d6ec97508bbaca0406687b589e89517642e51b8d024a1a97" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.224291 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" event={"ID":"db48a3bd-546d-4f52-a9bc-340e03790730","Type":"ContainerDied","Data":"7ee7b6faa999fda3d6ec97508bbaca0406687b589e89517642e51b8d024a1a97"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.226228 4675 generic.go:334] "Generic (PLEG): container finished" podID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerID="dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.226326 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerDied","Data":"dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.228349 4675 generic.go:334] "Generic (PLEG): container finished" podID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerID="13922ccdb386ccdef5ac3f7ca81cf15c2217528fedc1c377893db26450c6489d" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.228405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47cc-account-create-update-qbjjs" event={"ID":"f8458b8a-6770-4e62-9848-55a9b142cb8c","Type":"ContainerDied","Data":"13922ccdb386ccdef5ac3f7ca81cf15c2217528fedc1c377893db26450c6489d"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.231038 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"4364811164cf790d506ed582ab48a71d2c08ea8c39feab37977f45c719a19230"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.233680 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"} Jan 24 07:13:46 crc 
kubenswrapper[4675]: I0124 07:13:46.233844 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" containerID="cri-o://87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234074 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234118 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" containerID="cri-o://d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234160 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" containerID="cri-o://1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234193 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" containerID="cri-o://e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.241177 4675 generic.go:334] "Generic (PLEG): container finished" podID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerID="ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.241608 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" 
event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerDied","Data":"ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.254647 4675 generic.go:334] "Generic (PLEG): container finished" podID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerID="e024578d84cf52e29f779949e2955f4eac1d56a123af391ad810ea1674a31648" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.254736 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4z8kz" event={"ID":"9b6ffe68-4ebd-47e8-8b11-20050394e5b7","Type":"ContainerDied","Data":"e024578d84cf52e29f779949e2955f4eac1d56a123af391ad810ea1674a31648"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.257948 4675 generic.go:334] "Generic (PLEG): container finished" podID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerID="92a6b4b87b9b2ef26a79f73c81ecbeb36fe6ccb8b0e511ab2d00e52dda5c10ce" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.258053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" event={"ID":"cb102798-6f2c-4cf4-b697-03cc94f9174a","Type":"ContainerDied","Data":"92a6b4b87b9b2ef26a79f73c81ecbeb36fe6ccb8b0e511ab2d00e52dda5c10ce"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.258129 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" event={"ID":"cb102798-6f2c-4cf4-b697-03cc94f9174a","Type":"ContainerStarted","Data":"32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.298376 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.119006605 podStartE2EDuration="7.298362396s" podCreationTimestamp="2026-01-24 07:13:39 +0000 UTC" firstStartedPulling="2026-01-24 07:13:40.103802059 +0000 UTC m=+1221.399907282" 
lastFinishedPulling="2026-01-24 07:13:45.28315785 +0000 UTC m=+1226.579263073" observedRunningTime="2026-01-24 07:13:46.296008939 +0000 UTC m=+1227.592114162" watchObservedRunningTime="2026-01-24 07:13:46.298362396 +0000 UTC m=+1227.594467619" Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.885559 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.885950 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" containerID="cri-o://a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.886169 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" containerID="cri-o://92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e" gracePeriod=30 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.033373 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.038150 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.271405 4675 generic.go:334] "Generic (PLEG): container finished" podID="95652bba-0800-475e-9f2f-20e64195d523" containerID="a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994" exitCode=143 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.271486 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerDied","Data":"a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.274836 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5" exitCode=0 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.274860 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba" exitCode=2 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.274868 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d" exitCode=0 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.275696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.275821 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.275840 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.833045 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-p847h" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.892099 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.904579 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq" (OuterVolumeSpecName: "kube-api-access-spbfq") pod "d4d4a29e-dbe1-4145-b0af-afa0c77172b9" (UID: "d4d4a29e-dbe1-4145-b0af-afa0c77172b9"). InnerVolumeSpecName "kube-api-access-spbfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.998098 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.002795 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4d4a29e-dbe1-4145-b0af-afa0c77172b9" (UID: "d4d4a29e-dbe1-4145-b0af-afa0c77172b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.004372 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.106061 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.272288 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.325256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4z8kz" event={"ID":"9b6ffe68-4ebd-47e8-8b11-20050394e5b7","Type":"ContainerDied","Data":"d941027a5ad5be885d89fdff0b43597296adafeb003615ecbc4f53f02190bd78"} Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.325297 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d941027a5ad5be885d89fdff0b43597296adafeb003615ecbc4f53f02190bd78" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.325360 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.329296 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerDied","Data":"15c2743490cbd27546d69a8d6fb2ef4e7870e6c547960bdf525801f45200d4c2"} Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.329334 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c2743490cbd27546d69a8d6fb2ef4e7870e6c547960bdf525801f45200d4c2" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.329380 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p847h" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.363587 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.376756 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.397925 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.406372 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.414901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.415174 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.417262 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b6ffe68-4ebd-47e8-8b11-20050394e5b7" (UID: "9b6ffe68-4ebd-47e8-8b11-20050394e5b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.428536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr" (OuterVolumeSpecName: "kube-api-access-hs7nr") pod "9b6ffe68-4ebd-47e8-8b11-20050394e5b7" (UID: "9b6ffe68-4ebd-47e8-8b11-20050394e5b7"). InnerVolumeSpecName "kube-api-access-hs7nr". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.518133 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"db48a3bd-546d-4f52-a9bc-340e03790730\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.518475 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"cb102798-6f2c-4cf4-b697-03cc94f9174a\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.518643 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.519432 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"f8458b8a-6770-4e62-9848-55a9b142cb8c\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520032 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"cb102798-6f2c-4cf4-b697-03cc94f9174a\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520169 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"f8458b8a-6770-4e62-9848-55a9b142cb8c\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520298 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"db48a3bd-546d-4f52-a9bc-340e03790730\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520670 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.519507 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb102798-6f2c-4cf4-b697-03cc94f9174a" (UID: "cb102798-6f2c-4cf4-b697-03cc94f9174a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.519949 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8458b8a-6770-4e62-9848-55a9b142cb8c" (UID: "f8458b8a-6770-4e62-9848-55a9b142cb8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.522509 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9" (OuterVolumeSpecName: "kube-api-access-cr8d9") pod "db48a3bd-546d-4f52-a9bc-340e03790730" (UID: "db48a3bd-546d-4f52-a9bc-340e03790730"). InnerVolumeSpecName "kube-api-access-cr8d9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.523027 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db48a3bd-546d-4f52-a9bc-340e03790730" (UID: "db48a3bd-546d-4f52-a9bc-340e03790730"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.523605 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c962c5e1-a244-4690-935e-9a7b0d5fc7e4" (UID: "c962c5e1-a244-4690-935e-9a7b0d5fc7e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.525569 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679" (OuterVolumeSpecName: "kube-api-access-bg679") pod "f8458b8a-6770-4e62-9848-55a9b142cb8c" (UID: "f8458b8a-6770-4e62-9848-55a9b142cb8c"). InnerVolumeSpecName "kube-api-access-bg679". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.525825 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw" (OuterVolumeSpecName: "kube-api-access-7lwdw") pod "cb102798-6f2c-4cf4-b697-03cc94f9174a" (UID: "cb102798-6f2c-4cf4-b697-03cc94f9174a"). InnerVolumeSpecName "kube-api-access-7lwdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.525969 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8" (OuterVolumeSpecName: "kube-api-access-7nmq8") pod "c962c5e1-a244-4690-935e-9a7b0d5fc7e4" (UID: "c962c5e1-a244-4690-935e-9a7b0d5fc7e4"). InnerVolumeSpecName "kube-api-access-7nmq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.535972 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536189 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536372 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536450 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536537 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536611 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536688 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536770 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536843 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.055202 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.157998 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158047 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158176 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158246 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158302 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158335 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158372 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158400 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.159395 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.160926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs" (OuterVolumeSpecName: "logs") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.213086 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts" (OuterVolumeSpecName: "scripts") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.221562 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.221824 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7" (OuterVolumeSpecName: "kube-api-access-4r2w7") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "kube-api-access-4r2w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261322 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261350 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261360 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261369 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.280926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data" (OuterVolumeSpecName: "config-data") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.288998 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.319574 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.341390 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.351811 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47cc-account-create-update-qbjjs" event={"ID":"f8458b8a-6770-4e62-9848-55a9b142cb8c","Type":"ContainerDied","Data":"8cef497a71f5e8c9e6023c3573d0db3b33448ea3ea9a25de79ae6e97e277f95f"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.351873 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cef497a71f5e8c9e6023c3573d0db3b33448ea3ea9a25de79ae6e97e277f95f"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.352002 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369473 4675 generic.go:334] "Generic (PLEG): container finished" podID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b" exitCode=0
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369556 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerDied","Data":"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerDied","Data":"3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369602 4675 scope.go:117] "RemoveContainer" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369740 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.370612 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.370659 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.370670 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377180 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c" exitCode=0
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377235 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"0c61230a10f189c874d0a49db9b6e2672e6d430929f425e547950a4f18245bc4"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377311 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.384509 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.384884 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" event={"ID":"cb102798-6f2c-4cf4-b697-03cc94f9174a","Type":"ContainerDied","Data":"32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.384950 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.393008 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.397540 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" event={"ID":"db48a3bd-546d-4f52-a9bc-340e03790730","Type":"ContainerDied","Data":"d8fd9a81db61dcccc43a820e631e91369892f49d26e3b02c9f950de6edc3f9d7"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.397595 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8fd9a81db61dcccc43a820e631e91369892f49d26e3b02c9f950de6edc3f9d7"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.397705 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.427444 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerDied","Data":"bcc40f584741f9b08cd9119f4584722b1c051946412818cd64bd174bd95b9652"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.427482 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc40f584741f9b08cd9119f4584722b1c051946412818cd64bd174bd95b9652"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.427557 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.452210 4675 scope.go:117] "RemoveContainer" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.475833 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.475928 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.475971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476011 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476055 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476083 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476293 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476557 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476835 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476850 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.477001 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.493960 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts" (OuterVolumeSpecName: "scripts") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.498993 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792" (OuterVolumeSpecName: "kube-api-access-47792") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "kube-api-access-47792". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.502130 4675 scope.go:117] "RemoveContainer" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.502769 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b\": container with ID starting with 82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b not found: ID does not exist" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.502795 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"} err="failed to get container status \"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b\": rpc error: code = NotFound desc = could not find container \"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b\": container with ID starting with 82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.502815 4675 scope.go:117] "RemoveContainer" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.506767 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19\": container with ID starting with 5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19 not found: ID does not exist" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.506843 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"} err="failed to get container status \"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19\": rpc error: code = NotFound desc = could not find container \"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19\": container with ID starting with 5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19 not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.506882 4675 scope.go:117] "RemoveContainer" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.537231 4675 scope.go:117] "RemoveContainer" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.541272 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.567511 4675 scope.go:117] "RemoveContainer" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.578153 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581488 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581511 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581521 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581536 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581545 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.593583 4675 scope.go:117] "RemoveContainer" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.611519 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data" (OuterVolumeSpecName: "config-data") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.636435 4675 scope.go:117] "RemoveContainer" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.636797 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5\": container with ID starting with d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5 not found: ID does not exist" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.636830 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"} err="failed to get container status \"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5\": rpc error: code = NotFound desc = could not find container \"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5\": container with ID starting with d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5 not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.636857 4675 scope.go:117] "RemoveContainer" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.637501 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba\": container with ID starting with 1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba not found: ID does not exist" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.637522 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"} err="failed to get container status \"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba\": rpc error: code = NotFound desc = could not find container \"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba\": container with ID starting with 1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.637536 4675 scope.go:117] "RemoveContainer" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.637989 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d\": container with ID starting with e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d not found: ID does not exist" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.638011 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"} err="failed to get container status \"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d\": rpc error: code = NotFound desc = could not find container \"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d\": container with ID starting with e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.638025 4675 scope.go:117] "RemoveContainer" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.638347 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c\": container with ID starting with 87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c not found: ID does not exist" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.638375 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"} err="failed to get container status \"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c\": rpc error: code = NotFound desc = could not find container \"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c\": container with ID starting with 87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.702743 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.737216 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.745064 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.768173 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.783303 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.783992 4675 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784039 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784057 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784063 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784080 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784087 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784099 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784121 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784127 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc 
kubenswrapper[4675]: E0124 07:13:49.784141 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784147 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784158 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784166 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784179 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784185 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784202 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784222 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784228 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc 
kubenswrapper[4675]: E0124 07:13:49.784245 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784251 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784266 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784272 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784459 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784492 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784499 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784510 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784521 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784533 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784546 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784559 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784577 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784587 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.785812 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.790739 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.791077 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.793373 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.809173 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.840244 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.842616 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.845585 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.845780 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.849661 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907170 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907226 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907316 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907352 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-logs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907379 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7898\" (UniqueName: \"kubernetes.io/projected/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-kube-api-access-w7898\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907424 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907443 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009760 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009884 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009908 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010463 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010612 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010751 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010860 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010971 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011012 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011252 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-logs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011296 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011318 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7898\" (UniqueName: \"kubernetes.io/projected/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-kube-api-access-w7898\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011440 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.012174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-logs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.012912 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.014260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " 
pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.019188 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.022566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.022933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.036178 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7898\" (UniqueName: \"kubernetes.io/projected/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-kube-api-access-w7898\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.053582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.104319 4675 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.111556 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9292/healthcheck\": read tcp 10.217.0.2:60298->10.217.0.158:9292: read: connection reset by peer" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112013 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.158:9292/healthcheck\": read tcp 10.217.0.2:60284->10.217.0.158:9292: read: connection reset by peer" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112848 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 
07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112930 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.113023 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.114083 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.114393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.117462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.121386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.132800 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.144948 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.158951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.170616 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.441264 4675 generic.go:334] "Generic (PLEG): container finished" podID="95652bba-0800-475e-9f2f-20e64195d523" containerID="92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e" exitCode=0 Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.441332 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerDied","Data":"92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e"} Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.884168 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:13:50 crc kubenswrapper[4675]: W0124 07:13:50.888303 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0a8fdf4_03fc_4962_8792_6f129d2b00e4.slice/crio-41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c WatchSource:0}: Error finding container 41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c: Status 404 returned error can't find the container with id 41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.931559 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.957035 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" path="/var/lib/kubelet/pods/5a50c14d-d518-492c-87d1-a194dc075c9f/volumes" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.958326 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" path="/var/lib/kubelet/pods/918eda7b-6eff-4fb5-90d6-1b43a18787fb/volumes" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.008476 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.044577 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045854 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") 
pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045987 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.046072 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.046114 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.047183 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs" (OuterVolumeSpecName: "logs") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.047287 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.058054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts" (OuterVolumeSpecName: "scripts") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.058242 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht" (OuterVolumeSpecName: "kube-api-access-n2cht") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "kube-api-access-n2cht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.063461 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.113949 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.131616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data" (OuterVolumeSpecName: "config-data") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.141967 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149795 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149835 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149859 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149870 4675 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149880 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149891 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149908 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.173104 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.188063 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.252927 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.252985 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.459560 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0a8fdf4-03fc-4962-8792-6f129d2b00e4","Type":"ContainerStarted","Data":"41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c"} Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.460762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"81db948dee5fd531f128023fa7c1f1df374b7bed408e07fff63892479d70b1a7"} Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.463038 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerDied","Data":"e4fd29804bf1cbcfbb72dce66fcc9bef4e155c732e7dbfc8e6239baf96486755"} Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.463137 4675 scope.go:117] "RemoveContainer" containerID="92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.463084 4675 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.500987 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.523363 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.538701 4675 scope.go:117] "RemoveContainer" containerID="a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545166 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: E0124 07:13:51.545730 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545753 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" Jan 24 07:13:51 crc kubenswrapper[4675]: E0124 07:13:51.545778 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545789 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545993 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.546026 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 
07:13:51.547059 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.550415 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.550651 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.573072 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.661925 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx5j\" (UniqueName: \"kubernetes.io/projected/d61eafc8-f960-4335-8d26-2d47e8c7c039-kube-api-access-9fx5j\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.661992 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-logs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662085 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662165 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764080 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-logs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764301 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764365 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fx5j\" (UniqueName: \"kubernetes.io/projected/d61eafc8-f960-4335-8d26-2d47e8c7c039-kube-api-access-9fx5j\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764983 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.765077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.765671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-logs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.772048 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.773042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.779176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.784051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.789771 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fx5j\" (UniqueName: \"kubernetes.io/projected/d61eafc8-f960-4335-8d26-2d47e8c7c039-kube-api-access-9fx5j\") pod 
\"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.810994 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.883593 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.481234 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0a8fdf4-03fc-4962-8792-6f129d2b00e4","Type":"ContainerStarted","Data":"1440b0446c305524a6895876b8115f51034122782b72b9e9e4f478f123004c35"} Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.484501 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb"} Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.523262 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.963790 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95652bba-0800-475e-9f2f-20e64195d523" path="/var/lib/kubelet/pods/95652bba-0800-475e-9f2f-20e64195d523/volumes" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.498130 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d61eafc8-f960-4335-8d26-2d47e8c7c039","Type":"ContainerStarted","Data":"08f9eeaa23d0e7e1926b3bc91165e7fcb8440479b8a50e9af109c3b575c2b3c7"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.498514 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d61eafc8-f960-4335-8d26-2d47e8c7c039","Type":"ContainerStarted","Data":"b542c766f3c42910335f6149d7bde13e61206d90785b0b2a1dd900cf727a5605"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.511073 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0a8fdf4-03fc-4962-8792-6f129d2b00e4","Type":"ContainerStarted","Data":"da325debef05abf0da1ee626fa35253467ba9e4e7d733e9ee2a3883834531783"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.521394 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.548374 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.548360967 podStartE2EDuration="4.548360967s" podCreationTimestamp="2026-01-24 07:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:53.535593578 +0000 UTC m=+1234.831698801" watchObservedRunningTime="2026-01-24 07:13:53.548360967 +0000 UTC m=+1234.844466190" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.920827 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.921984 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.928566 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.928579 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-42gcl" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.928887 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.968252 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039408 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " 
pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039665 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.142470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.142964 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.143108 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.143332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: 
\"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.165842 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.166895 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.172066 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.177255 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.239862 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.312346 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.312679 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.547256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d61eafc8-f960-4335-8d26-2d47e8c7c039","Type":"ContainerStarted","Data":"1e118acaa4b68aa576d4e0cb5719e0c7aa2309b01c312be3a57455a035969ad2"} Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.556575 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c"} Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.577521 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.577498221 podStartE2EDuration="3.577498221s" podCreationTimestamp="2026-01-24 07:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:54.563597064 +0000 UTC m=+1235.859702287" watchObservedRunningTime="2026-01-24 07:13:54.577498221 +0000 UTC m=+1235.873603444" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.768231 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.598508 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32"} Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599028 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent" containerID="cri-o://fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb" gracePeriod=30 Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599370 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599678 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd" containerID="cri-o://63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32" gracePeriod=30 Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599809 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core" containerID="cri-o://533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c" gracePeriod=30 Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599848 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent" containerID="cri-o://f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da" gracePeriod=30 Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.609162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" 
event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerStarted","Data":"1f30006b6a3b95bebf5a27838f6aeb1a6be45660ab015b24f0aa23631999e023"} Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.623974 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.458562622 podStartE2EDuration="6.623956294s" podCreationTimestamp="2026-01-24 07:13:49 +0000 UTC" firstStartedPulling="2026-01-24 07:13:50.980107496 +0000 UTC m=+1232.276212719" lastFinishedPulling="2026-01-24 07:13:55.145501168 +0000 UTC m=+1236.441606391" observedRunningTime="2026-01-24 07:13:55.621656629 +0000 UTC m=+1236.917761852" watchObservedRunningTime="2026-01-24 07:13:55.623956294 +0000 UTC m=+1236.920061527" Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620621 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32" exitCode=0 Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620939 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c" exitCode=2 Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620947 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da" exitCode=0 Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620828 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32"} Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620976 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c"} Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da"} Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.108363 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.109095 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.152056 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.165675 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.664285 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.664475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.902591 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.903088 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.942286 4675 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.979951 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.679689 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.679733 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.681002 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.681060 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.229700 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.599424 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.718452 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb" exitCode=0 Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.722406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb"} Jan 24 07:14:04 crc kubenswrapper[4675]: I0124 07:14:04.312655 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" 
podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:14:05 crc kubenswrapper[4675]: I0124 07:14:05.752008 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:05 crc kubenswrapper[4675]: I0124 07:14:05.752291 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:14:06 crc kubenswrapper[4675]: I0124 07:14:06.341014 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.133628 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216670 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216731 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216827 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc 
kubenswrapper[4675]: I0124 07:14:07.216906 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217746 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.218358 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.218371 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.233512 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts" (OuterVolumeSpecName: "scripts") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.244015 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8" (OuterVolumeSpecName: "kube-api-access-zpsh8") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "kube-api-access-zpsh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.270240 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.324159 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.324191 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.324201 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.355655 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.414483 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data" (OuterVolumeSpecName: "config-data") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.425522 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.425570 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.766844 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerStarted","Data":"692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1"} Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.771231 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"81db948dee5fd531f128023fa7c1f1df374b7bed408e07fff63892479d70b1a7"} Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.771283 4675 scope.go:117] "RemoveContainer" containerID="63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.771445 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.786466 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" podStartSLOduration=2.591023916 podStartE2EDuration="14.786443693s" podCreationTimestamp="2026-01-24 07:13:53 +0000 UTC" firstStartedPulling="2026-01-24 07:13:54.754594584 +0000 UTC m=+1236.050699807" lastFinishedPulling="2026-01-24 07:14:06.950014361 +0000 UTC m=+1248.246119584" observedRunningTime="2026-01-24 07:14:07.784986418 +0000 UTC m=+1249.081091641" watchObservedRunningTime="2026-01-24 07:14:07.786443693 +0000 UTC m=+1249.082548916" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.803058 4675 scope.go:117] "RemoveContainer" containerID="533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.820335 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.847453 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.848608 4675 scope.go:117] "RemoveContainer" containerID="f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.867449 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.867975 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.867997 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent" Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.868035 4675 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868043 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core" Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.868062 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868070 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent" Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.868096 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868312 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868339 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868359 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868374 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.870463 4675 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.876696 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.885183 4675 scope.go:117] "RemoveContainer" containerID="fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.885917 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.909342 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.938899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.938962 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939076 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939111 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939147 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939187 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939213 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.040817 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041795 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j2z6\" (UniqueName: 
\"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041851 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041904 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041992 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.042018 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 
07:14:08.041737 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.043960 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.048956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.049336 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.050285 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.068194 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " 
pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.093609 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.202589 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.748624 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.783148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"9de865949fb8da851d910199e62bdf7d8f635e9c18af4381c27697a109e5dc7e"} Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.952926 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" path="/var/lib/kubelet/pods/5c6dd852-74ce-4b09-b7db-9ea8618ecab8/volumes" Jan 24 07:14:09 crc kubenswrapper[4675]: I0124 07:14:09.796407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895"} Jan 24 07:14:10 crc kubenswrapper[4675]: I0124 07:14:10.819100 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9"} Jan 24 07:14:11 crc kubenswrapper[4675]: I0124 07:14:11.500633 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 24 07:14:11 crc kubenswrapper[4675]: I0124 07:14:11.829147 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2"} Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.839902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37"} Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840039 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" containerID="cri-o://a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895" gracePeriod=30 Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840302 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" containerID="cri-o://19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9" gracePeriod=30 Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840332 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" containerID="cri-o://6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37" gracePeriod=30 Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840379 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840303 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" containerID="cri-o://3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2" gracePeriod=30 Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.860581 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2228184779999998 podStartE2EDuration="5.860564118s" podCreationTimestamp="2026-01-24 07:14:07 +0000 UTC" firstStartedPulling="2026-01-24 07:14:08.759960439 +0000 UTC m=+1250.056065662" lastFinishedPulling="2026-01-24 07:14:12.397706079 +0000 UTC m=+1253.693811302" observedRunningTime="2026-01-24 07:14:12.859923533 +0000 UTC m=+1254.156028756" watchObservedRunningTime="2026-01-24 07:14:12.860564118 +0000 UTC m=+1254.156669341" Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.850033 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37" exitCode=0 Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.850327 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2" exitCode=2 Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.850338 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9" exitCode=0 Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.851048 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37"} Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.851093 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2"} Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.851103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9"} Jan 24 07:14:14 crc kubenswrapper[4675]: I0124 07:14:14.312977 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:14:15 crc kubenswrapper[4675]: I0124 07:14:15.877433 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895" exitCode=0 Jan 24 07:14:15 crc kubenswrapper[4675]: I0124 07:14:15.877525 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895"} Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.023266 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.088208 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.088635 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089004 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089490 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089631 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089778 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089961 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.090115 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.090319 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.091270 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.091398 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.096027 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6" (OuterVolumeSpecName: "kube-api-access-5j2z6") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "kube-api-access-5j2z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.104391 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts" (OuterVolumeSpecName: "scripts") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.118711 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.173857 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193557 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193588 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193600 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193612 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.202322 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data" (OuterVolumeSpecName: "config-data") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.294996 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.892443 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"9de865949fb8da851d910199e62bdf7d8f635e9c18af4381c27697a109e5dc7e"} Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.893646 4675 scope.go:117] "RemoveContainer" containerID="6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.892526 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.925320 4675 scope.go:117] "RemoveContainer" containerID="3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.963537 4675 scope.go:117] "RemoveContainer" containerID="19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.980225 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.996684 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.009838 4675 scope.go:117] "RemoveContainer" containerID="a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.033924 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 
07:14:17.034310 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 07:14:17.034341 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034347 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 07:14:17.034368 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034374 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 07:14:17.034384 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034390 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034580 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" Jan 24 07:14:17 crc kubenswrapper[4675]: 
I0124 07:14:17.034591 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034599 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.036539 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.039967 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.040058 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.044132 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213839 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.214052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.214155 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.214405 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " 
pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.317206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.317348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.317504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.318230 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.318389 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.323699 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.324598 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.331861 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.336283 4675 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.337261 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.360768 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.830207 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.901276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"588244b07f5a60e0cabab824de73b0c1ab641046dedb0b1f0652661018ee56f9"} Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.912867 4675 generic.go:334] "Generic (PLEG): container finished" podID="827f33c6-ea9f-4312-9533-e952a218f464" containerID="692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1" exitCode=0 Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.912920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerDied","Data":"692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1"} Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.917548 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"} Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.961594 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" path="/var/lib/kubelet/pods/7e393df8-0787-4f26-a453-f7c9f27e91fc/volumes" Jan 24 07:14:19 crc kubenswrapper[4675]: I0124 07:14:19.928877 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"} Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.322418 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.477028 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.477092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.477134 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 
07:14:20.477177 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.500049 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s" (OuterVolumeSpecName: "kube-api-access-5rn4s") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "kube-api-access-5rn4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.501422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts" (OuterVolumeSpecName: "scripts") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.506864 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data" (OuterVolumeSpecName: "config-data") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.514077 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580903 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580956 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580967 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580976 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.941200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerDied","Data":"1f30006b6a3b95bebf5a27838f6aeb1a6be45660ab015b24f0aa23631999e023"} Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.941247 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f30006b6a3b95bebf5a27838f6aeb1a6be45660ab015b24f0aa23631999e023" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.946793 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.959905 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"} Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.042241 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 07:14:21 crc kubenswrapper[4675]: E0124 07:14:21.042701 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827f33c6-ea9f-4312-9533-e952a218f464" containerName="nova-cell0-conductor-db-sync" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.042740 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="827f33c6-ea9f-4312-9533-e952a218f464" containerName="nova-cell0-conductor-db-sync" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.042969 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="827f33c6-ea9f-4312-9533-e952a218f464" containerName="nova-cell0-conductor-db-sync" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.043667 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.046645 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-42gcl" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.047011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.061287 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.090360 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.090468 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.090536 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdsqs\" (UniqueName: \"kubernetes.io/projected/a3a43606-cba1-4fca-93c4-a1937ee449cc-kube-api-access-qdsqs\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.192460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.192556 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdsqs\" (UniqueName: \"kubernetes.io/projected/a3a43606-cba1-4fca-93c4-a1937ee449cc-kube-api-access-qdsqs\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.192634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.198050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.206509 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.211427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdsqs\" (UniqueName: \"kubernetes.io/projected/a3a43606-cba1-4fca-93c4-a1937ee449cc-kube-api-access-qdsqs\") pod \"nova-cell0-conductor-0\" 
(UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.361670 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.893390 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.972630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"} Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.973634 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.978000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a3a43606-cba1-4fca-93c4-a1937ee449cc","Type":"ContainerStarted","Data":"db1d3242c060eae2d124e04f7f8abe6aeb495bd394d21a072602d563d4cdf1d7"} Jan 24 07:14:22 crc kubenswrapper[4675]: I0124 07:14:22.006290 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.432710802 podStartE2EDuration="6.006268937s" podCreationTimestamp="2026-01-24 07:14:16 +0000 UTC" firstStartedPulling="2026-01-24 07:14:17.822916903 +0000 UTC m=+1259.119022126" lastFinishedPulling="2026-01-24 07:14:21.396475038 +0000 UTC m=+1262.692580261" observedRunningTime="2026-01-24 07:14:21.989788798 +0000 UTC m=+1263.285894021" watchObservedRunningTime="2026-01-24 07:14:22.006268937 +0000 UTC m=+1263.302374160" Jan 24 07:14:22 crc kubenswrapper[4675]: I0124 07:14:22.999700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"a3a43606-cba1-4fca-93c4-a1937ee449cc","Type":"ContainerStarted","Data":"60cd9e64b26927d2508b43e3b4c824146763a8d648e5dc2796029d66fc1099fe"} Jan 24 07:14:23 crc kubenswrapper[4675]: I0124 07:14:23.021058 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.021041273 podStartE2EDuration="2.021041273s" podCreationTimestamp="2026-01-24 07:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:23.013384997 +0000 UTC m=+1264.309490220" watchObservedRunningTime="2026-01-24 07:14:23.021041273 +0000 UTC m=+1264.317146496" Jan 24 07:14:24 crc kubenswrapper[4675]: I0124 07:14:24.007741 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:26 crc kubenswrapper[4675]: I0124 07:14:26.191685 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.779768 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.838150 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.838692 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log" containerID="cri-o://1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" gracePeriod=30 Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.838828 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" 
containerName="horizon" containerID="cri-o://32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" gracePeriod=30 Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.077774 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerDied","Data":"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"} Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.077708 4675 generic.go:334] "Generic (PLEG): container finished" podID="6462a086-070a-4998-8a59-cb4ccbf19867" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" exitCode=0 Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.396464 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.969575 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"] Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.971508 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.973334 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.981288 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.986444 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.019982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.020109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.020184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.020261 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.126925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.127270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.127321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.127370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.136796 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.144526 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.146847 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.166469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.209642 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.210923 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.221605 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.243966 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.245681 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.261536 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.271522 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.291505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.316336 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.336432 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.336532 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 
07:14:32.336596 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.361206 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.362477 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.374184 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.384238 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbjjc\" (UniqueName: 
\"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439494 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439509 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439535 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439557 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439600 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"nova-scheduler-0\" (UID: 
\"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439625 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439671 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.452361 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.453554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.478301 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.479937 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.488687 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.506452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.507728 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.541954 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542145 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbjjc\" 
(UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542209 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542262 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542790 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.547051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" 
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.555769 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.561359 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.562497 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.570578 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.599488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.619746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0" Jan 24 07:14:32 crc 
kubenswrapper[4675]: I0124 07:14:32.646186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646240 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646357 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646386 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.667031 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.668548 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.684550 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.689708 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752383 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752424 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752502 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752696 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752739 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752762 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752800 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.753688 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.766320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.768773 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.785512 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod 
\"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854343 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854411 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: 
\"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.855507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.856596 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.856898 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.856897 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.857406 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc 
kubenswrapper[4675]: I0124 07:14:32.871164 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.881440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.887738 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.006679 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.276888 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"] Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.450100 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.647553 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:14:33 crc kubenswrapper[4675]: W0124 07:14:33.664952 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda382715e_bef1_47d2_872f_21ffbda9df32.slice/crio-738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc WatchSource:0}: Error finding container 738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc: Status 404 returned error can't find the container with id 738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.746586 4675 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:33 crc kubenswrapper[4675]: W0124 07:14:33.758327 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea2a5b84_4fe4_4a8c_8009_2d63a5faec3d.slice/crio-ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff WatchSource:0}: Error finding container ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff: Status 404 returned error can't find the container with id ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.855278 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:14:33 crc kubenswrapper[4675]: W0124 07:14:33.859324 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9bf7666_9ba5_43db_a358_1a2df0e0b118.slice/crio-0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb WatchSource:0}: Error finding container 0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb: Status 404 returned error can't find the container with id 0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.985181 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.986680 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.989438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.990332 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 24 07:14:34 crc kubenswrapper[4675]: W0124 07:14:34.017618 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62846c05_d38a_49de_8303_468e98254357.slice/crio-95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a WatchSource:0}: Error finding container 95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a: Status 404 returned error can't find the container with id 95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.019172 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.056845 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.110827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.110956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") 
pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.111029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.111127 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.130632 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerStarted","Data":"95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.132999 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerStarted","Data":"ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.136529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerStarted","Data":"738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.139682 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerStarted","Data":"2cdb85f9986f03860143bd1424df78b4a24959bedd905d66093742accf17b4a5"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.142757 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerStarted","Data":"7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.142794 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerStarted","Data":"0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.148486 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.152601 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerStarted","Data":"78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.152651 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerStarted","Data":"8ba554aa39535408d6839b491d0293ee2d7ef9fe4bc35c53280c69ac8fd7419a"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.189212 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-5bzdh" podStartSLOduration=3.189188613 podStartE2EDuration="3.189188613s" podCreationTimestamp="2026-01-24 07:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:34.186195721 +0000 UTC m=+1275.482300954" watchObservedRunningTime="2026-01-24 07:14:34.189188613 +0000 UTC m=+1275.485293836" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214367 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " 
pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.220257 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.223632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.226193 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.237479 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.325340 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.984560 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:14:35 crc kubenswrapper[4675]: W0124 07:14:35.001326 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1819bfe_22cc_4ead_8e81_717ee70b2e83.slice/crio-d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056 WatchSource:0}: Error finding container d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056: Status 404 returned error can't find the container with id d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056 Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.166829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerStarted","Data":"d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056"} Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178450 4675 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerID="7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566" exitCode=0 Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178833 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerDied","Data":"7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566"} Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerStarted","Data":"c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7"} Jan 24 
07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178983 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.199017 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" podStartSLOduration=3.198999368 podStartE2EDuration="3.198999368s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:35.194505719 +0000 UTC m=+1276.490610942" watchObservedRunningTime="2026-01-24 07:14:35.198999368 +0000 UTC m=+1276.495104591" Jan 24 07:14:36 crc kubenswrapper[4675]: I0124 07:14:36.001247 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:36 crc kubenswrapper[4675]: I0124 07:14:36.030074 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:14:36 crc kubenswrapper[4675]: I0124 07:14:36.199480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerStarted","Data":"28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af"} Jan 24 07:14:38 crc kubenswrapper[4675]: I0124 07:14:38.985024 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" podStartSLOduration=5.985007813 podStartE2EDuration="5.985007813s" podCreationTimestamp="2026-01-24 07:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:36.222429034 +0000 UTC m=+1277.518534257" watchObservedRunningTime="2026-01-24 07:14:38.985007813 +0000 UTC m=+1280.281113036" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 
07:14:39.241374 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerStarted","Data":"80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.241430 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerStarted","Data":"224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerStarted","Data":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244216 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerStarted","Data":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244409 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" containerID="cri-o://54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" gracePeriod=30 Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244574 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-metadata" containerID="cri-o://ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" gracePeriod=30 Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.266180 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerStarted","Data":"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.266575 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" gracePeriod=30 Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.270992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerStarted","Data":"297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.304080 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.244932922 podStartE2EDuration="7.304057315s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:34.026256334 +0000 UTC m=+1275.322361557" lastFinishedPulling="2026-01-24 07:14:38.085380727 +0000 UTC m=+1279.381485950" observedRunningTime="2026-01-24 07:14:39.266141876 +0000 UTC m=+1280.562247099" watchObservedRunningTime="2026-01-24 07:14:39.304057315 +0000 UTC m=+1280.600162528" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.340847 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.040276023 podStartE2EDuration="7.340824707s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:33.766835057 +0000 UTC m=+1275.062940280" lastFinishedPulling="2026-01-24 07:14:38.067383731 +0000 UTC m=+1279.363488964" observedRunningTime="2026-01-24 07:14:39.287054713 +0000 UTC 
m=+1280.583159936" watchObservedRunningTime="2026-01-24 07:14:39.340824707 +0000 UTC m=+1280.636929930" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.342431 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.94163684 podStartE2EDuration="7.342420125s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:33.666576626 +0000 UTC m=+1274.962681849" lastFinishedPulling="2026-01-24 07:14:38.067359911 +0000 UTC m=+1279.363465134" observedRunningTime="2026-01-24 07:14:39.311184907 +0000 UTC m=+1280.607290130" watchObservedRunningTime="2026-01-24 07:14:39.342420125 +0000 UTC m=+1280.638525348" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.367009 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.838228634 podStartE2EDuration="7.36698995s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:33.537333074 +0000 UTC m=+1274.833438297" lastFinishedPulling="2026-01-24 07:14:38.06609439 +0000 UTC m=+1279.362199613" observedRunningTime="2026-01-24 07:14:39.324325207 +0000 UTC m=+1280.620430440" watchObservedRunningTime="2026-01-24 07:14:39.36698995 +0000 UTC m=+1280.663095173" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.896080 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043794 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043873 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043991 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.046066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs" (OuterVolumeSpecName: "logs") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.049191 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc" (OuterVolumeSpecName: "kube-api-access-z9wtc") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "kube-api-access-z9wtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.075034 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.081450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data" (OuterVolumeSpecName: "config-data") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146686 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146739 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146751 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146760 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.280286 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" exitCode=0 Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.280327 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" exitCode=143 Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.281109 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.285931 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerDied","Data":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.286222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerDied","Data":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.286267 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerDied","Data":"ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff"} Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.286289 4675 scope.go:117] "RemoveContainer" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.321929 4675 scope.go:117] "RemoveContainer" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.331570 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.346687 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.353143 4675 scope.go:117] "RemoveContainer" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.353699 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": container with ID starting with ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7 not found: ID does not exist" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.353756 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} err="failed to get container status \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": rpc error: code = NotFound desc = could not find container \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": container with ID starting with ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.353780 4675 scope.go:117] "RemoveContainer" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.356744 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": container with ID starting with 54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877 not found: ID does not exist" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.356781 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} err="failed to get container status \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": rpc error: code = NotFound desc = could not find container \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": container with ID 
starting with 54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.356809 4675 scope.go:117] "RemoveContainer" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.358414 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} err="failed to get container status \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": rpc error: code = NotFound desc = could not find container \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": container with ID starting with ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.358458 4675 scope.go:117] "RemoveContainer" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.358850 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} err="failed to get container status \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": rpc error: code = NotFound desc = could not find container \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": container with ID starting with 54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.373645 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.374212 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" 
containerName="nova-metadata-metadata" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374231 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-metadata" Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.374269 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374276 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374501 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374516 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-metadata" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.375787 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.397365 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.397491 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.411470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.553751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.553795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.554029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.554123 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.554225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656284 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656491 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.657217 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.661678 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.662951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.671357 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.691260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod 
\"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.717790 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.958124 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" path="/var/lib/kubelet/pods/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d/volumes" Jan 24 07:14:41 crc kubenswrapper[4675]: I0124 07:14:41.234342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:41 crc kubenswrapper[4675]: W0124 07:14:41.239305 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod422994aa_4835_4b8e_bc15_ea6e636ffa7f.slice/crio-24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b WatchSource:0}: Error finding container 24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b: Status 404 returned error can't find the container with id 24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b Jan 24 07:14:41 crc kubenswrapper[4675]: I0124 07:14:41.292585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerStarted","Data":"24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b"} Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.304120 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerStarted","Data":"9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102"} Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.304522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerStarted","Data":"604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9"} Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.332929 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.332908167 podStartE2EDuration="2.332908167s" podCreationTimestamp="2026-01-24 07:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:42.322856313 +0000 UTC m=+1283.618961556" watchObservedRunningTime="2026-01-24 07:14:42.332908167 +0000 UTC m=+1283.629013390" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.558693 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.558775 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.585546 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.690316 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.888206 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.888283 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.008952 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.097373 4675 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.097609 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" containerID="cri-o://80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5" gracePeriod=10 Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.313385 4675 generic.go:334] "Generic (PLEG): container finished" podID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerID="80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5" exitCode=0 Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.313428 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerDied","Data":"80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5"} Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.315993 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerID="78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24" exitCode=0 Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.316065 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerDied","Data":"78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24"} Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.367107 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.729160 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.814598 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.814685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815542 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815821 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815845 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815947 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.828059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj" (OuterVolumeSpecName: "kube-api-access-hsndj") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "kube-api-access-hsndj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.888891 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.902606 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.905505 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.919309 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.919351 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.919363 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.922104 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config" (OuterVolumeSpecName: "config") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.931925 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.944068 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.957607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.020594 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.020628 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.020638 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.148820 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.326597 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerDied","Data":"40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a"} Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.326639 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.326685 4675 scope.go:117] "RemoveContainer" containerID="80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.329022 4675 generic.go:334] "Generic (PLEG): container finished" podID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" containerID="28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af" exitCode=0 Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.329189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerDied","Data":"28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af"} Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.389150 4675 scope.go:117] "RemoveContainer" containerID="e5be4111b174b0893302ec72491db10290b0324936ef7df715e08fe66a0569cc" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.389165 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.399908 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.743607 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837452 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837491 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.843425 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts" (OuterVolumeSpecName: "scripts") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.844650 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp" (OuterVolumeSpecName: "kube-api-access-gpbnp") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "kube-api-access-gpbnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.867456 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.875344 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data" (OuterVolumeSpecName: "config-data") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.939852 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.939890 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.940104 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.940117 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.959245 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" path="/var/lib/kubelet/pods/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b/volumes" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.344126 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.344249 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerDied","Data":"8ba554aa39535408d6839b491d0293ee2d7ef9fe4bc35c53280c69ac8fd7419a"} Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.344289 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba554aa39535408d6839b491d0293ee2d7ef9fe4bc35c53280c69ac8fd7419a" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.560372 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.560872 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" containerID="cri-o://224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.560955 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" containerID="cri-o://80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.581679 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.581864 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler" containerID="cri-o://297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 
07:14:45.607060 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.607273 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log" containerID="cri-o://604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.607753 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata" containerID="cri-o://9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.721839 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.721903 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.892502 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.961777 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.961905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.961933 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.962021 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.965578 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts" (OuterVolumeSpecName: "scripts") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.969896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx" (OuterVolumeSpecName: "kube-api-access-x7rgx") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "kube-api-access-x7rgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.987589 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data" (OuterVolumeSpecName: "config-data") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.988962 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064677 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064758 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064769 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064778 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.354890 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerDied","Data":"d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.355197 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.354912 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.365964 4675 generic.go:334] "Generic (PLEG): container finished" podID="d8e802ff-b559-4ef9-9826-708faf39b488" containerID="297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d" exitCode=0 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.366058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerDied","Data":"297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.373383 4675 generic.go:334] "Generic (PLEG): container finished" podID="62846c05-d38a-49de-8303-468e98254357" containerID="224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311" exitCode=143 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.373450 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerDied","Data":"224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377697 4675 generic.go:334] "Generic (PLEG): container finished" podID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerID="9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102" exitCode=0 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377749 4675 generic.go:334] "Generic (PLEG): container finished" podID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerID="604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9" exitCode=143 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerDied","Data":"9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102"} 
Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerDied","Data":"604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.442872 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443281 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerName="nova-manage" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443295 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerName="nova-manage" Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443313 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443319 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443332 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="init" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="init" Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" containerName="nova-cell1-conductor-db-sync" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443359 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" 
containerName="nova-cell1-conductor-db-sync" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443514 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerName="nova-manage" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443542 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" containerName="nova-cell1-conductor-db-sync" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443550 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.444125 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.446756 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.453470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.573003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.573077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvl4w\" (UniqueName: \"kubernetes.io/projected/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-kube-api-access-hvl4w\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 
crc kubenswrapper[4675]: I0124 07:14:46.573317 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.675054 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.675185 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.675229 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvl4w\" (UniqueName: \"kubernetes.io/projected/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-kube-api-access-hvl4w\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.681943 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.691938 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.698113 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvl4w\" (UniqueName: \"kubernetes.io/projected/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-kube-api-access-hvl4w\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.768370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.770761 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.776684 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877407 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877467 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"d8e802ff-b559-4ef9-9826-708faf39b488\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877526 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"d8e802ff-b559-4ef9-9826-708faf39b488\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877642 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"d8e802ff-b559-4ef9-9826-708faf39b488\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877667 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877699 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.880680 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs" (OuterVolumeSpecName: "logs") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.883384 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz" (OuterVolumeSpecName: "kube-api-access-8lbmz") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "kube-api-access-8lbmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.907312 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg" (OuterVolumeSpecName: "kube-api-access-lh5zg") pod "d8e802ff-b559-4ef9-9826-708faf39b488" (UID: "d8e802ff-b559-4ef9-9826-708faf39b488"). InnerVolumeSpecName "kube-api-access-lh5zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.924986 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8e802ff-b559-4ef9-9826-708faf39b488" (UID: "d8e802ff-b559-4ef9-9826-708faf39b488"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.930811 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.937898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data" (OuterVolumeSpecName: "config-data") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.954397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data" (OuterVolumeSpecName: "config-data") pod "d8e802ff-b559-4ef9-9826-708faf39b488" (UID: "d8e802ff-b559-4ef9-9826-708faf39b488"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.962459 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982109 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982226 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982320 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982380 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982435 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982545 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982605 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982667 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.258606 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: W0124 07:14:47.260295 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8afe3d83_5678_47e9_be7d_dfbf50fa5bc9.slice/crio-6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee WatchSource:0}: Error finding container 6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee: Status 404 returned error can't find the container with id 6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.371811 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 07:14:47 crc 
kubenswrapper[4675]: I0124 07:14:47.396256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9","Type":"ContainerStarted","Data":"6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee"} Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.424149 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerDied","Data":"24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b"} Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.424210 4675 scope.go:117] "RemoveContainer" containerID="9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.424387 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.433772 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerDied","Data":"2cdb85f9986f03860143bd1424df78b4a24959bedd905d66093742accf17b4a5"} Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.433969 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.465788 4675 scope.go:117] "RemoveContainer" containerID="604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.492474 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.520196 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.523982 4675 scope.go:117] "RemoveContainer" containerID="297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.545967 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.557937 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: E0124 07:14:47.558298 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558329 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log" Jan 24 07:14:47 crc kubenswrapper[4675]: E0124 07:14:47.558363 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558370 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler" Jan 24 07:14:47 crc kubenswrapper[4675]: E0124 07:14:47.558382 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558388 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558556 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558576 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558590 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.559312 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.564568 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.570238 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.581670 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.594041 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.595589 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.598111 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.598292 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.607818 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715693 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715784 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715813 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"nova-scheduler-0\" (UID: 
\"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715830 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715870 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817149 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " 
pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817196 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817327 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817372 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817391 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817421 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.818660 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.821155 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.821937 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.822168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " 
pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.823256 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.824121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.834654 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.837041 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.909504 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.928283 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.416669 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.426024 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:48 crc kubenswrapper[4675]: W0124 07:14:48.428414 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c6c830_f77b_47f7_a874_02324d6c8c39.slice/crio-e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b WatchSource:0}: Error finding container e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b: Status 404 returned error can't find the container with id e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b Jan 24 07:14:48 crc kubenswrapper[4675]: W0124 07:14:48.433858 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc3534ca_1196_47a7_889c_cead596f7636.slice/crio-c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048 WatchSource:0}: Error finding container c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048: Status 404 returned error can't find the container with id c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048 Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.447301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerStarted","Data":"e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b"} Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.455431 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9","Type":"ContainerStarted","Data":"7ed7eb9419f7e3c81f3bb1aa6d91f51b28305f96a5c0360a78d7280047efdd4a"} Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.455681 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.477165 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.477148078 podStartE2EDuration="2.477148078s" podCreationTimestamp="2026-01-24 07:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:48.467499495 +0000 UTC m=+1289.763604718" watchObservedRunningTime="2026-01-24 07:14:48.477148078 +0000 UTC m=+1289.773253301" Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.955986 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" path="/var/lib/kubelet/pods/422994aa-4835-4b8e-bc15-ea6e636ffa7f/volumes" Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.956819 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" path="/var/lib/kubelet/pods/d8e802ff-b559-4ef9-9826-708faf39b488/volumes" Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.491362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerStarted","Data":"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"} Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.497440 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerStarted","Data":"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"} Jan 24 
07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.497511 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerStarted","Data":"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"} Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.497527 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerStarted","Data":"c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048"} Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.517547 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.517526875 podStartE2EDuration="2.517526875s" podCreationTimestamp="2026-01-24 07:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:49.513254241 +0000 UTC m=+1290.809359464" watchObservedRunningTime="2026-01-24 07:14:49.517526875 +0000 UTC m=+1290.813632098" Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.549565 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.549545651 podStartE2EDuration="2.549545651s" podCreationTimestamp="2026-01-24 07:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:49.534402944 +0000 UTC m=+1290.830508167" watchObservedRunningTime="2026-01-24 07:14:49.549545651 +0000 UTC m=+1290.845650874" Jan 24 07:14:50 crc kubenswrapper[4675]: I0124 07:14:50.507980 4675 generic.go:334] "Generic (PLEG): container finished" podID="62846c05-d38a-49de-8303-468e98254357" containerID="80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41" exitCode=0 Jan 24 07:14:50 crc 
kubenswrapper[4675]: I0124 07:14:50.508049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerDied","Data":"80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41"} Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.110139 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180250 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180386 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180453 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180503 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.182254 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs" (OuterVolumeSpecName: "logs") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.210243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh" (OuterVolumeSpecName: "kube-api-access-67rnh") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "kube-api-access-67rnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.222090 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.225783 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data" (OuterVolumeSpecName: "config-data") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.282727 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.283038 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.283050 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.283064 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.518099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerDied","Data":"95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a"} Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.518945 4675 scope.go:117] "RemoveContainer" containerID="80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.518164 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.543685 4675 scope.go:117] "RemoveContainer" containerID="224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.557248 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.567891 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.578839 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:51 crc kubenswrapper[4675]: E0124 07:14:51.579353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579371 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" Jan 24 07:14:51 crc kubenswrapper[4675]: E0124 07:14:51.579392 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579398 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579563 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579588 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.581600 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.589737 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.606764 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.693950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.694264 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.694311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.694396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.796327 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.796890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.797432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.797881 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.798615 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.802606 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.802795 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics" 
containerID="cri-o://4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" gracePeriod=30 Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.806461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.807175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.820676 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0" Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.901088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.366605 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: W0124 07:14:52.466574 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27fe021c_fb3a_41e9_a491_b3859b6748e6.slice/crio-f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb WatchSource:0}: Error finding container f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb: Status 404 returned error can't find the container with id f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.467343 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.512871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.519509 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2" (OuterVolumeSpecName: "kube-api-access-q67r2") pod "740dfadf-4d28-4f03-ab2c-cf51c7e078bf" (UID: "740dfadf-4d28-4f03-ab2c-cf51c7e078bf"). InnerVolumeSpecName "kube-api-access-q67r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551732 4675 generic.go:334] "Generic (PLEG): container finished" podID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" exitCode=2 Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551869 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551932 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerDied","Data":"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"} Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551981 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerDied","Data":"8fc4ca63f03726f8d4f4612fb16075bb874d642ea255530f0cde869af0c01186"} Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551997 4675 scope.go:117] "RemoveContainer" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.558586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerStarted","Data":"f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb"} Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.617640 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.618579 4675 scope.go:117] "RemoveContainer" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" Jan 24 07:14:52 crc kubenswrapper[4675]: E0124 07:14:52.620801 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058\": container with ID starting with 4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058 not found: ID does not exist" 
containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.620905 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"} err="failed to get container status \"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058\": rpc error: code = NotFound desc = could not find container \"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058\": container with ID starting with 4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058 not found: ID does not exist" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.626048 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.647752 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.660131 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:52 crc kubenswrapper[4675]: E0124 07:14:52.660511 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.660528 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.660707 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.661317 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.664419 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.664614 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.671185 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821668 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821846 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h9xg\" (UniqueName: 
\"kubernetes.io/projected/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-api-access-4h9xg\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.909761 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h9xg\" (UniqueName: \"kubernetes.io/projected/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-api-access-4h9xg\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923799 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.927987 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.928923 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.929071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.929980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.931604 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.952273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h9xg\" (UniqueName: \"kubernetes.io/projected/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-api-access-4h9xg\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.962276 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62846c05-d38a-49de-8303-468e98254357" 
path="/var/lib/kubelet/pods/62846c05-d38a-49de-8303-468e98254357/volumes" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.963046 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" path="/var/lib/kubelet/pods/740dfadf-4d28-4f03-ab2c-cf51c7e078bf/volumes" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.976054 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.257232 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:53 crc kubenswrapper[4675]: W0124 07:14:53.278932 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb742b344_80ea_48bf_bd28_8f1be00b4442.slice/crio-e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd WatchSource:0}: Error finding container e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd: Status 404 returned error can't find the container with id e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.576926 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b742b344-80ea-48bf-bd28-8f1be00b4442","Type":"ContainerStarted","Data":"e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd"} Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.581335 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerStarted","Data":"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3"} Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.581409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerStarted","Data":"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055"} Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.830273 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.830255094 podStartE2EDuration="2.830255094s" podCreationTimestamp="2026-01-24 07:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:53.609044713 +0000 UTC m=+1294.905149936" watchObservedRunningTime="2026-01-24 07:14:53.830255094 +0000 UTC m=+1295.126360307" Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836063 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836509 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" containerID="cri-o://63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" gracePeriod=30 Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836559 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" containerID="cri-o://c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" gracePeriod=30 Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836622 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" containerID="cri-o://d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" gracePeriod=30 Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836635 4675 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" containerID="cri-o://15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" gracePeriod=30 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.148474 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.148880 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.591984 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" exitCode=0 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592022 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" exitCode=2 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592017 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592034 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" exitCode=0 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592056 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592071 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.597803 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b742b344-80ea-48bf-bd28-8f1be00b4442","Type":"ContainerStarted","Data":"74c7bb11a8e80e07ac874f4ee0791a7d78cffd00427bd158904b53e1c98bfacd"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.597863 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.618168 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.223786872 podStartE2EDuration="2.618146541s" podCreationTimestamp="2026-01-24 07:14:52 +0000 UTC" firstStartedPulling="2026-01-24 07:14:53.284053506 +0000 UTC m=+1294.580158729" lastFinishedPulling="2026-01-24 07:14:53.678413185 +0000 UTC m=+1294.974518398" observedRunningTime="2026-01-24 07:14:54.61352068 +0000 UTC m=+1295.909625913" watchObservedRunningTime="2026-01-24 07:14:54.618146541 +0000 UTC m=+1295.914251774" Jan 24 07:14:55 crc kubenswrapper[4675]: I0124 07:14:55.962245 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092009 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092465 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092505 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092748 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092768 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092788 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.094258 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.096189 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.107184 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq" (OuterVolumeSpecName: "kube-api-access-bxvrq") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "kube-api-access-bxvrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.123983 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts" (OuterVolumeSpecName: "scripts") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.128354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.183285 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196208 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196237 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196250 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196259 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196268 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.201745 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data" (OuterVolumeSpecName: "config-data") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.298355 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616328 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" exitCode=0 Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"} Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"588244b07f5a60e0cabab824de73b0c1ab641046dedb0b1f0652661018ee56f9"} Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616423 4675 scope.go:117] "RemoveContainer" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616562 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.657090 4675 scope.go:117] "RemoveContainer" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.659806 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.679751 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698404 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698784 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698811 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698824 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698830 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698859 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698866 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698882 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698888 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699059 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699074 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699083 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699095 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.700646 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.704088 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.704328 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.704487 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.711906 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.772355 4675 scope.go:117] "RemoveContainer" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.804199 4675 scope.go:117] "RemoveContainer" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813903 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813966 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814072 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814118 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.824174 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.831672 4675 scope.go:117] "RemoveContainer" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"
Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.833487 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8\": container with ID starting with c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8 not found: ID does not exist" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.833545 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"} err="failed to get container status \"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8\": rpc error: code = NotFound desc = could not find container \"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8\": container with ID starting with c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8 not found: ID does not exist"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.833581 4675 scope.go:117] "RemoveContainer" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"
Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.834013 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462\": container with ID starting with d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462 not found: ID does not exist" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834077 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"} err="failed to get container status \"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462\": rpc error: code = NotFound desc = could not find container \"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462\": container with ID starting with d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462 not found: ID does not exist"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834126 4675 scope.go:117] "RemoveContainer" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"
Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.834900 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9\": container with ID starting with 15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9 not found: ID does not exist" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834939 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"} err="failed to get container status \"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9\": rpc error: code = NotFound desc = could not find container \"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9\": container with ID starting with 15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9 not found: ID does not exist"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834993 4675 scope.go:117] "RemoveContainer" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"
Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.835417 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04\": container with ID starting with 63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04 not found: ID does not exist" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.835450 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"} err="failed to get container status \"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04\": rpc error: code = NotFound desc = could not find container \"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04\": container with ID starting with 63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04 not found: ID does not exist"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915611 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915635 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915776 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.916669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.916769 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.923080 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.924074 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.924909 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.925408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.933428 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.937231 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0"
Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.952172 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" path="/var/lib/kubelet/pods/c11da70b-e611-45b4-af1b-fe7ac3dacb85/volumes"
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.077410 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.536316 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:57 crc kubenswrapper[4675]: W0124 07:14:57.545790 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f8195be_66ea_4e34_807c_1c8eae25ab81.slice/crio-1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e WatchSource:0}: Error finding container 1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e: Status 404 returned error can't find the container with id 1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.548164 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.629365 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e"}
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.910144 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.929540 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.929598 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.962159 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.317579 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445898 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445930 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445965 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.446027 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.446058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") "
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.447098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs" (OuterVolumeSpecName: "logs") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.462086 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.475734 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p" (OuterVolumeSpecName: "kube-api-access-kw79p") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "kube-api-access-kw79p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.491900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts" (OuterVolumeSpecName: "scripts") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.496560 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data" (OuterVolumeSpecName: "config-data") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.513352 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.522244 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549787 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549815 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549826 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549837 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549846 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549855 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549862 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.645530 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"}
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656017 4675 generic.go:334] "Generic (PLEG): container finished" podID="6462a086-070a-4998-8a59-cb4ccbf19867" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" exitCode=137
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656090 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerDied","Data":"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"}
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656213 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerDied","Data":"d950b238b60f1812543dfb4f7f5294f5560f40c993673b23b13c0d2609edbe30"}
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656234 4675 scope.go:117] "RemoveContainer" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.734327 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.910359 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"]
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.933604 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"]
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.951040 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.951162 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.963177 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" path="/var/lib/kubelet/pods/6462a086-070a-4998-8a59-cb4ccbf19867/volumes"
Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.021569 4675 scope.go:117] "RemoveContainer" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"
Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.042999 4675 scope.go:117] "RemoveContainer" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"
Jan 24 07:14:59 crc kubenswrapper[4675]: E0124 07:14:59.043547 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3\": container with ID starting with 32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3 not found: ID does not exist" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"
Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.043578 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"} err="failed to get container status \"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3\": rpc error: code = NotFound desc = could not find container \"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3\": container with ID starting with 32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3 not found: ID does not exist"
Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.043598 4675 scope.go:117] "RemoveContainer" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"
Jan 24 07:14:59 crc kubenswrapper[4675]: E0124 07:14:59.044074 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50\": container with ID starting with 1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50 not found: ID does not exist" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"
Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.044113 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"} err="failed to get container status \"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50\": rpc error: code = NotFound desc = could not find container \"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50\": container with ID starting with 1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50 not found: ID does not exist"
Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.672848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"}
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314093 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"]
Jan 24 07:15:00 crc kubenswrapper[4675]: E0124 07:15:00.314501 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314522 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon"
Jan 24 07:15:00 crc kubenswrapper[4675]: E0124 07:15:00.314551 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314560 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314751 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.315314 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.318552 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.318747 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.337695 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"]
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.380620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.380694 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.380760 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.482230 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.482300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.482334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.483107 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.487392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.497126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.642897 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.700171 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"}
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.152647 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"]
Jan 24 07:15:01 crc kubenswrapper[4675]: W0124 07:15:01.160181 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992bc9f8_4adf_4940_95d5_942895a4d935.slice/crio-0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c WatchSource:0}: Error finding container 0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c: Status 404 returned error can't find the container with id 0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.713257 4675 generic.go:334] "Generic (PLEG): container finished" podID="992bc9f8-4adf-4940-95d5-942895a4d935" containerID="4ceca7bb4c3f8f330a726083a805861d2285d706134fb31908c2ce567855cf82" exitCode=0
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.713368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" event={"ID":"992bc9f8-4adf-4940-95d5-942895a4d935","Type":"ContainerDied","Data":"4ceca7bb4c3f8f330a726083a805861d2285d706134fb31908c2ce567855cf82"}
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.713535 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" event={"ID":"992bc9f8-4adf-4940-95d5-942895a4d935","Type":"ContainerStarted","Data":"0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c"}
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.719638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"}
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.720490 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.792196 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3503014589999998 podStartE2EDuration="5.792176232s" podCreationTimestamp="2026-01-24 07:14:56 +0000 UTC" firstStartedPulling="2026-01-24 07:14:57.547904541 +0000 UTC m=+1298.844009764" lastFinishedPulling="2026-01-24 07:15:00.989779314 +0000 UTC m=+1302.285884537" observedRunningTime="2026-01-24 07:15:01.788163125 +0000 UTC m=+1303.084268358" watchObservedRunningTime="2026-01-24 07:15:01.792176232 +0000 UTC m=+1303.088281455"
Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.901982
4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.902048 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:15:02 crc kubenswrapper[4675]: I0124 07:15:02.985909 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:02 crc kubenswrapper[4675]: I0124 07:15:02.986048 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.001878 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.131614 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.250903 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"992bc9f8-4adf-4940-95d5-942895a4d935\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.251413 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"992bc9f8-4adf-4940-95d5-942895a4d935\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.251466 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"992bc9f8-4adf-4940-95d5-942895a4d935\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.252376 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume" (OuterVolumeSpecName: "config-volume") pod "992bc9f8-4adf-4940-95d5-942895a4d935" (UID: "992bc9f8-4adf-4940-95d5-942895a4d935"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.256891 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "992bc9f8-4adf-4940-95d5-942895a4d935" (UID: "992bc9f8-4adf-4940-95d5-942895a4d935"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.257134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt" (OuterVolumeSpecName: "kube-api-access-x7wpt") pod "992bc9f8-4adf-4940-95d5-942895a4d935" (UID: "992bc9f8-4adf-4940-95d5-942895a4d935"). InnerVolumeSpecName "kube-api-access-x7wpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.353478 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.353514 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.353528 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.742779 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" event={"ID":"992bc9f8-4adf-4940-95d5-942895a4d935","Type":"ContainerDied","Data":"0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c"} Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.742823 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.743156 4675 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:07 crc kubenswrapper[4675]: I0124 07:15:07.935473 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:07 crc kubenswrapper[4675]: I0124 07:15:07.938111 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:07 crc kubenswrapper[4675]: I0124 07:15:07.951072 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 07:15:08 crc kubenswrapper[4675]: I0124 07:15:08.629923 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:15:08 crc kubenswrapper[4675]: I0124 07:15:08.629988 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:15:08 crc kubenswrapper[4675]: I0124 07:15:08.790854 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.705624 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.776091 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"a382715e-bef1-47d2-872f-21ffbda9df32\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.776230 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"a382715e-bef1-47d2-872f-21ffbda9df32\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.776263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"a382715e-bef1-47d2-872f-21ffbda9df32\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.782198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc" (OuterVolumeSpecName: "kube-api-access-nbjjc") pod "a382715e-bef1-47d2-872f-21ffbda9df32" (UID: "a382715e-bef1-47d2-872f-21ffbda9df32"). InnerVolumeSpecName "kube-api-access-nbjjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.800072 4675 generic.go:334] "Generic (PLEG): container finished" podID="a382715e-bef1-47d2-872f-21ffbda9df32" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" exitCode=137 Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.801886 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.802101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerDied","Data":"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a"} Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.802164 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerDied","Data":"738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc"} Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.802188 4675 scope.go:117] "RemoveContainer" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.809536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a382715e-bef1-47d2-872f-21ffbda9df32" (UID: "a382715e-bef1-47d2-872f-21ffbda9df32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.810881 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data" (OuterVolumeSpecName: "config-data") pod "a382715e-bef1-47d2-872f-21ffbda9df32" (UID: "a382715e-bef1-47d2-872f-21ffbda9df32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.878522 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.879021 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.879115 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.883025 4675 scope.go:117] "RemoveContainer" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" Jan 24 07:15:09 crc kubenswrapper[4675]: E0124 07:15:09.883521 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a\": container with ID starting with f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a not found: ID does not exist" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.883553 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a"} err="failed to get container status \"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a\": rpc error: code = NotFound desc = could not find container \"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a\": container with ID 
starting with f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a not found: ID does not exist" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.141499 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.152074 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.177921 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: E0124 07:15:10.178366 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178387 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 07:15:10 crc kubenswrapper[4675]: E0124 07:15:10.178429 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" containerName="collect-profiles" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178437 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" containerName="collect-profiles" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178635 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178664 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" containerName="collect-profiles" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.179432 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.185568 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.185843 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.185993 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.198005 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.286691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.286813 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfx94\" (UniqueName: \"kubernetes.io/projected/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-kube-api-access-qfx94\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.286856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc 
kubenswrapper[4675]: I0124 07:15:10.286899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.287038 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfx94\" (UniqueName: \"kubernetes.io/projected/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-kube-api-access-qfx94\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc 
kubenswrapper[4675]: I0124 07:15:10.389358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.393435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.393629 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.394896 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.400633 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.406849 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfx94\" (UniqueName: \"kubernetes.io/projected/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-kube-api-access-qfx94\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.510214 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.956565 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" path="/var/lib/kubelet/pods/a382715e-bef1-47d2-872f-21ffbda9df32/volumes" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.012488 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:11 crc kubenswrapper[4675]: W0124 07:15:11.016970 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda485ae65_6b4d_4cc6_9623_dc0b722f47e8.slice/crio-32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d WatchSource:0}: Error finding container 32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d: Status 404 returned error can't find the container with id 32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.830938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a485ae65-6b4d-4cc6-9623-dc0b722f47e8","Type":"ContainerStarted","Data":"0b0de0d2f11ca21b621155c6914f60384a4f54529e763fda6e9631401b8bc8e8"} Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.831002 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a485ae65-6b4d-4cc6-9623-dc0b722f47e8","Type":"ContainerStarted","Data":"32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d"} Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.854539 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.8545182 podStartE2EDuration="1.8545182s" podCreationTimestamp="2026-01-24 07:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:11.849045497 +0000 UTC m=+1313.145150740" watchObservedRunningTime="2026-01-24 07:15:11.8545182 +0000 UTC m=+1313.150623433" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.905537 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.906235 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.908335 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.916680 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 07:15:12 crc kubenswrapper[4675]: I0124 07:15:12.840285 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:12 crc kubenswrapper[4675]: I0124 07:15:12.844648 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 
24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.060384 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.062316 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.086939 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159156 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159211 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5l2t\" (UniqueName: 
\"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159771 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.260968 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.262174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.262672 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.263213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.263887 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.264525 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.288657 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.390658 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.903542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:15:14 crc kubenswrapper[4675]: I0124 07:15:14.857573 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerID="e61aa274860730298b3d466a37bbb7b9f9970e99b80ea9db136fc24849710d8e" exitCode=0 Jan 24 07:15:14 crc kubenswrapper[4675]: I0124 07:15:14.857645 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerDied","Data":"e61aa274860730298b3d466a37bbb7b9f9970e99b80ea9db136fc24849710d8e"} Jan 24 07:15:14 crc kubenswrapper[4675]: I0124 07:15:14.858964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerStarted","Data":"2c3c2a43e5e1f891dc078496766b3dbc527e0916a446f16d71f3f1e737ccce2c"} Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.257178 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.257625 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent" containerID="cri-o://06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" gracePeriod=30 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.258397 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core" containerID="cri-o://e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" gracePeriod=30 Jan 24 07:15:15 crc kubenswrapper[4675]: 
I0124 07:15:15.258533 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" containerID="cri-o://d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" gracePeriod=30 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.258602 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent" containerID="cri-o://d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" gracePeriod=30 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.340880 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.341815 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.510874 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.869156 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerStarted","Data":"0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480"} Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.870216 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873467 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" 
containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" exitCode=0 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873491 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" exitCode=2 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873499 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" exitCode=0 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873698 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" containerID="cri-o://f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" gracePeriod=30 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873930 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"} Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"} Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"} Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.874047 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" containerID="cri-o://47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" gracePeriod=30 Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.903469 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" podStartSLOduration=2.903447606 podStartE2EDuration="2.903447606s" podCreationTimestamp="2026-01-24 07:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:15.896744073 +0000 UTC m=+1317.192849336" watchObservedRunningTime="2026-01-24 07:15:15.903447606 +0000 UTC m=+1317.199552829" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.574102 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628917 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc 
kubenswrapper[4675]: I0124 07:15:16.628980 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629050 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629071 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629129 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd" (OuterVolumeSpecName: "run-httpd") 
pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629813 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.630095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.652059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66" (OuterVolumeSpecName: "kube-api-access-5xx66") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "kube-api-access-5xx66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.653427 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts" (OuterVolumeSpecName: "scripts") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.694117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733363 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733392 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733402 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733411 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.802201 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.824441 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.832809 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data" (OuterVolumeSpecName: "config-data") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.835853 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.835873 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.835886 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.884537 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" exitCode=0 
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.884939 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.884914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"} Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.885697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e"} Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.885789 4675 scope.go:117] "RemoveContainer" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.887156 4675 generic.go:334] "Generic (PLEG): container finished" podID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" exitCode=143 Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.887758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerDied","Data":"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055"} Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.910670 4675 scope.go:117] "RemoveContainer" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.923533 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.931254 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:16 crc 
kubenswrapper[4675]: I0124 07:15:16.938547 4675 scope.go:117] "RemoveContainer" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.957706 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" path="/var/lib/kubelet/pods/3f8195be-66ea-4e34-807c-1c8eae25ab81/volumes" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958560 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958873 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958888 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent" Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958907 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958914 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent" Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958926 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958931 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core" Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958953 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" Jan 24 07:15:16 crc 
kubenswrapper[4675]: I0124 07:15:16.958959 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959115 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959132 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959145 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959157 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.960957 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.961080 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.967634 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.967779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.967856 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.989400 4675 scope.go:117] "RemoveContainer" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.024649 4675 scope.go:117] "RemoveContainer" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.025123 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f\": container with ID starting with d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f not found: ID does not exist" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025167 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"} err="failed to get container status \"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f\": rpc error: code = NotFound desc = could not find container \"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f\": container with ID starting with d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f not found: ID does not exist" Jan 24 07:15:17 crc 
kubenswrapper[4675]: I0124 07:15:17.025193 4675 scope.go:117] "RemoveContainer" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.025511 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c\": container with ID starting with e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c not found: ID does not exist" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025547 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"} err="failed to get container status \"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c\": rpc error: code = NotFound desc = could not find container \"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c\": container with ID starting with e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c not found: ID does not exist" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025575 4675 scope.go:117] "RemoveContainer" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.025928 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec\": container with ID starting with d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec not found: ID does not exist" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025952 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"} err="failed to get container status \"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec\": rpc error: code = NotFound desc = could not find container \"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec\": container with ID starting with d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec not found: ID does not exist" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025965 4675 scope.go:117] "RemoveContainer" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.026181 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914\": container with ID starting with 06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914 not found: ID does not exist" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.026208 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"} err="failed to get container status \"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914\": rpc error: code = NotFound desc = could not find container \"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914\": container with ID starting with 06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914 not found: ID does not exist" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040651 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"ceilometer-0\" (UID: 
\"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040855 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040984 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.041035 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 
07:15:17.041071 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.041095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.142940 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143092 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143251 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.144321 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 
07:15:17.144369 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148270 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148272 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148401 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.157228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " 
pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.163627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.281121 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.594643 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.828439 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.920545 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"184d1377ee27c4e0aa9c78bfe778ae0e24d8f487e027e7a4ab2ff93d1556f7a5"} Jan 24 07:15:18 crc kubenswrapper[4675]: I0124 07:15:18.936619 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.424057 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.601878 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.601976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602186 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602270 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs" (OuterVolumeSpecName: "logs") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602599 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.613145 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx" (OuterVolumeSpecName: "kube-api-access-xgxcx") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "kube-api-access-xgxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.662305 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data" (OuterVolumeSpecName: "config-data") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.672898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.705645 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.705671 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.705681 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953278 4675 generic.go:334] "Generic (PLEG): container finished" podID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" exitCode=0 Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953518 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerDied","Data":"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerDied","Data":"f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953559 4675 scope.go:117] "RemoveContainer" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953670 4675 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.957798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.979590 4675 scope.go:117] "RemoveContainer" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.989221 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.001071 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.015505 4675 scope.go:117] "RemoveContainer" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.026743 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.027082 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027099 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.027143 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027267 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027439 
4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027460 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.028973 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.034640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.034850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.035216 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.037597 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3\": container with ID starting with 47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3 not found: ID does not exist" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.037653 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3"} err="failed to get container status \"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3\": rpc error: code = NotFound desc = could not find container \"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3\": container with ID starting with 
47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3 not found: ID does not exist" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.037688 4675 scope.go:117] "RemoveContainer" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.040789 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055\": container with ID starting with f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055 not found: ID does not exist" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.040833 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055"} err="failed to get container status \"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055\": rpc error: code = NotFound desc = could not find container \"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055\": container with ID starting with f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055 not found: ID does not exist" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.055425 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213669 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213793 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213852 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213880 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.214003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod 
\"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.317057 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.317211 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.317012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.320739 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.320953 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.323085 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.323286 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.335296 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.354348 4675 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.517707 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.545604 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:20 crc kubenswrapper[4675]: W0124 07:15:20.879687 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce57208f_54cc_491b_b898_ba4fddd26d3c.slice/crio-15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee WatchSource:0}: Error finding container 15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee: Status 404 returned error can't find the container with id 15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.901961 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.955973 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" path="/var/lib/kubelet/pods/27fe021c-fb3a-41e9-a491-b3859b6748e6/volumes" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.966661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerStarted","Data":"15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee"} Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.969114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc"} Jan 24 07:15:20 crc 
kubenswrapper[4675]: I0124 07:15:20.986885 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.281245 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.282908 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.299460 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.300086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.304558 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367441 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367664 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367708 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469483 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.473165 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.477336 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.478371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.485037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.612632 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.980451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerStarted","Data":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.980771 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerStarted","Data":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984423 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" containerID="cri-o://74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984521 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" containerID="cri-o://e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984558 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" containerID="cri-o://bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984588 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" 
containerID="cri-o://1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56"} Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984797 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:15:22 crc kubenswrapper[4675]: I0124 07:15:22.009036 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.00901277 podStartE2EDuration="2.00901277s" podCreationTimestamp="2026-01-24 07:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:22.00198195 +0000 UTC m=+1323.298087183" watchObservedRunningTime="2026-01-24 07:15:22.00901277 +0000 UTC m=+1323.305117993" Jan 24 07:15:22 crc kubenswrapper[4675]: I0124 07:15:22.199658 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.612290511 podStartE2EDuration="6.19962938s" podCreationTimestamp="2026-01-24 07:15:16 +0000 UTC" firstStartedPulling="2026-01-24 07:15:17.841491099 +0000 UTC m=+1319.137596322" lastFinishedPulling="2026-01-24 07:15:21.428829968 +0000 UTC m=+1322.724935191" observedRunningTime="2026-01-24 07:15:22.034203091 +0000 UTC m=+1323.330308304" watchObservedRunningTime="2026-01-24 07:15:22.19962938 +0000 UTC m=+1323.495734603" Jan 24 07:15:22 crc kubenswrapper[4675]: I0124 07:15:22.202938 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002686 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56" exitCode=0 Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002986 4675 generic.go:334] "Generic (PLEG): container finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc" exitCode=2 Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002999 4675 generic.go:334] "Generic (PLEG): container finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab" exitCode=0 Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002752 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.003059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.003073 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.007151 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerStarted","Data":"d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.007174 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerStarted","Data":"cb36a9d1535f5681537b95b7ba3c55273a5ae88f25755d1811dccbc09984b358"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.037173 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dxv2k" podStartSLOduration=2.03714811 podStartE2EDuration="2.03714811s" podCreationTimestamp="2026-01-24 07:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:23.024405531 +0000 UTC m=+1324.320510754" watchObservedRunningTime="2026-01-24 07:15:23.03714811 +0000 UTC m=+1324.333253343" Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.391881 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.481759 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.482169 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" containerID="cri-o://c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7" gracePeriod=10 Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.020948 4675 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerID="c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7" exitCode=0 Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.021072 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" 
event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerDied","Data":"c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7"} Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.021278 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerDied","Data":"0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb"} Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.021298 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.048214 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138617 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138737 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138759 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138789 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138814 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138866 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.152296 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc" (OuterVolumeSpecName: "kube-api-access-vmmvc") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "kube-api-access-vmmvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.210003 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.212017 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.224278 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config" (OuterVolumeSpecName: "config") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.225343 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241734 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241774 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241786 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241801 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241812 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.258741 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.343409 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:25 crc kubenswrapper[4675]: I0124 07:15:25.030037 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:15:25 crc kubenswrapper[4675]: I0124 07:15:25.066607 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:15:25 crc kubenswrapper[4675]: I0124 07:15:25.077011 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.042545 4675 generic.go:334] "Generic (PLEG): container finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c" exitCode=0 Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.042586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c"} Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.317994 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.382936 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383042 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383093 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383117 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383161 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383276 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383340 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.385385 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.387487 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.391428 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts" (OuterVolumeSpecName: "scripts") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.408590 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5" (OuterVolumeSpecName: "kube-api-access-8sbn5") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "kube-api-access-8sbn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.444496 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.470452 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.485981 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486031 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486044 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486098 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486112 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486126 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.500902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: 
"fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.512071 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data" (OuterVolumeSpecName: "config-data") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.587171 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.587204 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.954471 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" path="/var/lib/kubelet/pods/a9bf7666-9ba5-43db-a358-1a2df0e0b118/volumes" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.055121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"184d1377ee27c4e0aa9c78bfe778ae0e24d8f487e027e7a4ab2ff93d1556f7a5"} Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.055195 4675 scope.go:117] "RemoveContainer" containerID="e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.055285 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.089051 4675 scope.go:117] "RemoveContainer" containerID="bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.126405 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.139786 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150223 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150774 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150801 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150823 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150832 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150848 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150883 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150893 4675 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="init" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150901 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="init" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150914 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150923 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150939 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150949 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151178 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151197 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151213 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151229 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151246 4675 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.155492 4675 scope.go:117] "RemoveContainer" containerID="1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.165363 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.170294 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.170412 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.170584 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.176551 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.194777 4675 scope.go:117] "RemoveContainer" containerID="74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-log-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-run-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" 
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304294 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kkd\" (UniqueName: \"kubernetes.io/projected/ed571c62-3ced-4952-a932-37a5a84da52f-kube-api-access-l9kkd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304345 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-scripts\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-config-data\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-scripts\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406657 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-config-data\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406803 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-log-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406828 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-run-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9kkd\" (UniqueName: \"kubernetes.io/projected/ed571c62-3ced-4952-a932-37a5a84da52f-kube-api-access-l9kkd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406903 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.408077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-run-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.408492 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-log-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.412914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-scripts\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.413117 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.421691 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-config-data\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.423098 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.423514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.439228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9kkd\" (UniqueName: \"kubernetes.io/projected/ed571c62-3ced-4952-a932-37a5a84da52f-kube-api-access-l9kkd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.485044 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.933537 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.063415 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"5054ee53168c2196b91018ade5a9641f6cc03ad48c115ecdacfa04ce81b91961"}
Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.066584 4675 generic.go:334] "Generic (PLEG): container finished" podID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerID="d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4" exitCode=0
Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.066622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerDied","Data":"d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4"}
Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.955542 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" path="/var/lib/kubelet/pods/fe8d76d6-1c10-4489-8dd7-913259f97b21/volumes"
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.076701 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"cf75ef15edf87dd90a39b57b53a81a05a390853dde0b6283a4e627d1e08f0a2b"}
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.552741 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k"
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.747562 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") "
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.748060 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") "
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.748140 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") "
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.748171 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") "
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.753397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts" (OuterVolumeSpecName: "scripts") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.753950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n" (OuterVolumeSpecName: "kube-api-access-pkf5n") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "kube-api-access-pkf5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.776247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data" (OuterVolumeSpecName: "config-data") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.779484 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849916 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849942 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849955 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849964 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.086133 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerDied","Data":"cb36a9d1535f5681537b95b7ba3c55273a5ae88f25755d1811dccbc09984b358"}
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.086174 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb36a9d1535f5681537b95b7ba3c55273a5ae88f25755d1811dccbc09984b358"
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.086236 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k"
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.092239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"e19f9d1428d421170057abb69b4e4f629df4265a476a0339f809f9fcae412d46"}
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.092276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"430402ce13c206a02b019a617bebe3e98cc7555beb74bf43df44b234e24f8704"}
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.312506 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.313330 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api" containerID="cri-o://c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" gracePeriod=30
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.313655 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log" containerID="cri-o://f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" gracePeriod=30
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.331230 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.331443 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler" containerID="cri-o://b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" gracePeriod=30
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.365402 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.365605 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" containerID="cri-o://f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" gracePeriod=30
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.365757 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" containerID="cri-o://0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" gracePeriod=30
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.854520 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.969947 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") "
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970023 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") "
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970085 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") "
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970115 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") "
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") "
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970215 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") "
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.971772 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs" (OuterVolumeSpecName: "logs") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.976748 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j" (OuterVolumeSpecName: "kube-api-access-wpp7j") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "kube-api-access-wpp7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.000012 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.011096 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data" (OuterVolumeSpecName: "config-data") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.022823 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.043844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.071975 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072028 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072039 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072048 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072058 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072066 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.103882 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" exitCode=0
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105099 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" exitCode=143
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.103939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerDied","Data":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"}
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.103923 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105342 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerDied","Data":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"}
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerDied","Data":"15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee"}
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105386 4675 scope.go:117] "RemoveContainer" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.107665 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc3534ca-1196-47a7-889c-cead596f7636" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" exitCode=143
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.107709 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerDied","Data":"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"}
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.132585 4675 scope.go:117] "RemoveContainer" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.142702 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.154094 4675 scope.go:117] "RemoveContainer" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"
Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.155783 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": container with ID starting with c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722 not found: ID does not exist" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.155831 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} err="failed to get container status \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": rpc error: code = NotFound desc = could not find container \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": container with ID starting with c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722 not found: ID does not exist"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.155877 4675 scope.go:117] "RemoveContainer" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"
Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.156181 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": container with ID starting with f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc not found: ID does not exist" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.156252 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} err="failed to get container status \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": rpc error: code = NotFound desc = could not find container \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": container with ID starting with f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc not found: ID does not exist"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.156272 4675 scope.go:117] "RemoveContainer" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159183 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} err="failed to get container status \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": rpc error: code = NotFound desc = could not find container \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": container with ID starting with c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722 not found: ID does not exist"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159262 4675 scope.go:117] "RemoveContainer" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159500 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159561 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} err="failed to get container status \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": rpc error: code = NotFound desc = could not find container \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": container with ID starting with f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc not found: ID does not exist"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.184547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.185027 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerName="nova-manage"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185042 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerName="nova-manage"
Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.185056 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185063 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log"
Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.185083 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185089 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185279 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185301 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerName="nova-manage"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185313 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.186278 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.189869 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.191524 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.191734 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.217070 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.275816 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.275892 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a0d5c5-541f-4a43-9d20-22264dca21d1-logs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276023 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7d8\" (UniqueName: \"kubernetes.io/projected/95a0d5c5-541f-4a43-9d20-22264dca21d1-kube-api-access-dg7d8\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276100 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276327 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-config-data\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.379473 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.380532 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-config-data\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381015 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381189 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a0d5c5-541f-4a43-9d20-22264dca21d1-logs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7d8\" (UniqueName: \"kubernetes.io/projected/95a0d5c5-541f-4a43-9d20-22264dca21d1-kube-api-access-dg7d8\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381661 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a0d5c5-541f-4a43-9d20-22264dca21d1-logs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.383905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-config-data\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.391162 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.393159 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.401364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7d8\" (UniqueName: \"kubernetes.io/projected/95a0d5c5-541f-4a43-9d20-22264dca21d1-kube-api-access-dg7d8\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.403435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.587817 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.093296 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:32 crc kubenswrapper[4675]: W0124 07:15:32.101161 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95a0d5c5_541f_4a43_9d20_22264dca21d1.slice/crio-dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922 WatchSource:0}: Error finding container dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922: Status 404 returned error can't find the container with id dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922 Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.121771 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"3fe57003c181a071a09a4bb572dd905afb384399d75bdada8cd28f68e2743a29"} Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.122777 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.123406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95a0d5c5-541f-4a43-9d20-22264dca21d1","Type":"ContainerStarted","Data":"dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922"} Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.139569 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7646933809999998 podStartE2EDuration="5.13955107s" podCreationTimestamp="2026-01-24 07:15:27 +0000 UTC" firstStartedPulling="2026-01-24 07:15:27.953340096 +0000 UTC m=+1329.249445319" lastFinishedPulling="2026-01-24 07:15:31.328197785 +0000 UTC m=+1332.624303008" observedRunningTime="2026-01-24 07:15:32.137229574 
+0000 UTC m=+1333.433334797" watchObservedRunningTime="2026-01-24 07:15:32.13955107 +0000 UTC m=+1333.435656293" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.631634 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.815238 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"39c6c830-f77b-47f7-a874-02324d6c8c39\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.815355 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"39c6c830-f77b-47f7-a874-02324d6c8c39\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.815571 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"39c6c830-f77b-47f7-a874-02324d6c8c39\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.820965 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn" (OuterVolumeSpecName: "kube-api-access-hb9jn") pod "39c6c830-f77b-47f7-a874-02324d6c8c39" (UID: "39c6c830-f77b-47f7-a874-02324d6c8c39"). InnerVolumeSpecName "kube-api-access-hb9jn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.852489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39c6c830-f77b-47f7-a874-02324d6c8c39" (UID: "39c6c830-f77b-47f7-a874-02324d6c8c39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.853129 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data" (OuterVolumeSpecName: "config-data") pod "39c6c830-f77b-47f7-a874-02324d6c8c39" (UID: "39c6c830-f77b-47f7-a874-02324d6c8c39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.917558 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.917593 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.917603 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.953283 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" path="/var/lib/kubelet/pods/ce57208f-54cc-491b-b898-ba4fddd26d3c/volumes" Jan 
24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.139577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95a0d5c5-541f-4a43-9d20-22264dca21d1","Type":"ContainerStarted","Data":"646875f2527b96c84e70c52ec9ae92ee249a13a5911412bb88ba4b7d05634e11"} Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.139621 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95a0d5c5-541f-4a43-9d20-22264dca21d1","Type":"ContainerStarted","Data":"46150f884abff5f9f82b371b50a0ee90cfd445db579317c3a86904ce248eb317"} Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.145443 4675 generic.go:334] "Generic (PLEG): container finished" podID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" exitCode=0 Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.146687 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.147254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerDied","Data":"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"} Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.147292 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerDied","Data":"e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b"} Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.147311 4675 scope.go:117] "RemoveContainer" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.170576 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.170556639 
podStartE2EDuration="2.170556639s" podCreationTimestamp="2026-01-24 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:33.163502238 +0000 UTC m=+1334.459607461" watchObservedRunningTime="2026-01-24 07:15:33.170556639 +0000 UTC m=+1334.466661862" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.193502 4675 scope.go:117] "RemoveContainer" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.193681 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:15:33 crc kubenswrapper[4675]: E0124 07:15:33.194473 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188\": container with ID starting with b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188 not found: ID does not exist" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.194511 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"} err="failed to get container status \"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188\": rpc error: code = NotFound desc = could not find container \"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188\": container with ID starting with b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188 not found: ID does not exist" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.218442 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.247403 4675 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:15:33 crc kubenswrapper[4675]: E0124 07:15:33.251042 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.251068 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.251782 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.255379 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.258322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.307057 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.346201 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.346318 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-config-data\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.346346 
4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hssd\" (UniqueName: \"kubernetes.io/projected/361b5d16-2808-40ad-88a0-f07fd4c33e3e-kube-api-access-2hssd\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.448332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.448442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-config-data\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.448466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hssd\" (UniqueName: \"kubernetes.io/projected/361b5d16-2808-40ad-88a0-f07fd4c33e3e-kube-api-access-2hssd\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.454459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.454481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-config-data\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.466325 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hssd\" (UniqueName: \"kubernetes.io/projected/361b5d16-2808-40ad-88a0-f07fd4c33e3e-kube-api-access-2hssd\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.521288 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:58304->10.217.0.197:8775: read: connection reset by peer" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.521334 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:58312->10.217.0.197:8775: read: connection reset by peer" Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.602408 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.003556 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066467 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066647 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066708 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.071811 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs" (OuterVolumeSpecName: "logs") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.073504 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb" (OuterVolumeSpecName: "kube-api-access-9xhxb") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "kube-api-access-9xhxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.125216 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data" (OuterVolumeSpecName: "config-data") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.141786 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161035 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc3534ca-1196-47a7-889c-cead596f7636" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" exitCode=0 Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161146 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerDied","Data":"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"} Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161173 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerDied","Data":"c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048"} Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.162067 4675 scope.go:117] "RemoveContainer" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168510 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168545 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168558 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xhxb\" (UniqueName: 
\"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168572 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.169051 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.199260 4675 scope.go:117] "RemoveContainer" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" Jan 24 07:15:34 crc kubenswrapper[4675]: W0124 07:15:34.209555 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod361b5d16_2808_40ad_88a0_f07fd4c33e3e.slice/crio-56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d WatchSource:0}: Error finding container 56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d: Status 404 returned error can't find the container with id 56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.209957 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.270179 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") on node \"crc\" 
DevicePath \"\"" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.360660 4675 scope.go:117] "RemoveContainer" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.361096 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204\": container with ID starting with 0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204 not found: ID does not exist" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.361130 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"} err="failed to get container status \"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204\": rpc error: code = NotFound desc = could not find container \"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204\": container with ID starting with 0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204 not found: ID does not exist" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.361157 4675 scope.go:117] "RemoveContainer" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.361502 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211\": container with ID starting with f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211 not found: ID does not exist" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.361528 4675 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"} err="failed to get container status \"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211\": rpc error: code = NotFound desc = could not find container \"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211\": container with ID starting with f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211 not found: ID does not exist" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.494589 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.508431 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516387 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.516774 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516789 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.516802 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516809 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516980 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" Jan 24 07:15:34 crc 
kubenswrapper[4675]: I0124 07:15:34.517005 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.517872 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.519746 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.522781 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.534893 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575737 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-config-data\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " 
pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575845 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgb8k\" (UniqueName: \"kubernetes.io/projected/d55e1385-c016-4bb9-afc2-a070f5a88241-kube-api-access-vgb8k\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d55e1385-c016-4bb9-afc2-a070f5a88241-logs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678053 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678299 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-config-data\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678565 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vgb8k\" (UniqueName: \"kubernetes.io/projected/d55e1385-c016-4bb9-afc2-a070f5a88241-kube-api-access-vgb8k\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678669 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d55e1385-c016-4bb9-afc2-a070f5a88241-logs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.679147 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d55e1385-c016-4bb9-afc2-a070f5a88241-logs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.682919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.685238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-config-data\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.688489 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.695279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgb8k\" (UniqueName: \"kubernetes.io/projected/d55e1385-c016-4bb9-afc2-a070f5a88241-kube-api-access-vgb8k\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.833960 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.963702 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" path="/var/lib/kubelet/pods/39c6c830-f77b-47f7-a874-02324d6c8c39/volumes" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.965792 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3534ca-1196-47a7-889c-cead596f7636" path="/var/lib/kubelet/pods/bc3534ca-1196-47a7-889c-cead596f7636/volumes" Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.187153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"361b5d16-2808-40ad-88a0-f07fd4c33e3e","Type":"ContainerStarted","Data":"bd3399e6747c252d7babb83956312d2708d980c161f2022e34c6f7fe7deb36da"} Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.187195 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"361b5d16-2808-40ad-88a0-f07fd4c33e3e","Type":"ContainerStarted","Data":"56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d"} Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.227151 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.227130185 podStartE2EDuration="2.227130185s" podCreationTimestamp="2026-01-24 07:15:33 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:35.215217937 +0000 UTC m=+1336.511323160" watchObservedRunningTime="2026-01-24 07:15:35.227130185 +0000 UTC m=+1336.523235418" Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.295841 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:35 crc kubenswrapper[4675]: W0124 07:15:35.295967 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd55e1385_c016_4bb9_afc2_a070f5a88241.slice/crio-680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3 WatchSource:0}: Error finding container 680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3: Status 404 returned error can't find the container with id 680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3 Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.205138 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d55e1385-c016-4bb9-afc2-a070f5a88241","Type":"ContainerStarted","Data":"0800c0553de08ed7748a32fa828ba7b206d1eadc16175e7e9248f918dd2247b1"} Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.205482 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d55e1385-c016-4bb9-afc2-a070f5a88241","Type":"ContainerStarted","Data":"bc941574cbce074f050c4711401dadee97817cd498ae5f78af3f88de274a2d67"} Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.205497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d55e1385-c016-4bb9-afc2-a070f5a88241","Type":"ContainerStarted","Data":"680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3"} Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.232055 4675 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.232034042 podStartE2EDuration="2.232034042s" podCreationTimestamp="2026-01-24 07:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:36.228590859 +0000 UTC m=+1337.524696112" watchObservedRunningTime="2026-01-24 07:15:36.232034042 +0000 UTC m=+1337.528139265" Jan 24 07:15:38 crc kubenswrapper[4675]: I0124 07:15:38.603553 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 07:15:38 crc kubenswrapper[4675]: I0124 07:15:38.629775 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:15:38 crc kubenswrapper[4675]: I0124 07:15:38.629854 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:15:39 crc kubenswrapper[4675]: I0124 07:15:39.835025 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:15:39 crc kubenswrapper[4675]: I0124 07:15:39.836162 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:15:41 crc kubenswrapper[4675]: I0124 07:15:41.588890 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:15:41 crc kubenswrapper[4675]: I0124 07:15:41.589655 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 24 07:15:42 crc kubenswrapper[4675]: I0124 07:15:42.601867 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="95a0d5c5-541f-4a43-9d20-22264dca21d1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:42 crc kubenswrapper[4675]: I0124 07:15:42.601923 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="95a0d5c5-541f-4a43-9d20-22264dca21d1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:43 crc kubenswrapper[4675]: I0124 07:15:43.603564 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 07:15:43 crc kubenswrapper[4675]: I0124 07:15:43.642292 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 07:15:44 crc kubenswrapper[4675]: I0124 07:15:44.328657 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 07:15:44 crc kubenswrapper[4675]: I0124 07:15:44.834769 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 07:15:44 crc kubenswrapper[4675]: I0124 07:15:44.834811 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 07:15:45 crc kubenswrapper[4675]: I0124 07:15:45.847957 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d55e1385-c016-4bb9-afc2-a070f5a88241" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:45 crc 
kubenswrapper[4675]: I0124 07:15:45.847965 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d55e1385-c016-4bb9-afc2-a070f5a88241" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.595376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.596336 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.596487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.600900 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 07:15:52 crc kubenswrapper[4675]: I0124 07:15:52.367244 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:52 crc kubenswrapper[4675]: I0124 07:15:52.375692 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 07:15:54 crc kubenswrapper[4675]: I0124 07:15:54.841757 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:54 crc kubenswrapper[4675]: I0124 07:15:54.844596 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:54 crc kubenswrapper[4675]: I0124 07:15:54.849986 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 07:15:55 crc kubenswrapper[4675]: I0124 07:15:55.404680 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 24 07:15:57 crc kubenswrapper[4675]: I0124 07:15:57.495120 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 07:16:07 crc kubenswrapper[4675]: I0124 07:16:07.888764 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.629845 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.630215 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.630264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.631682 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.631771 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" 
containerName="machine-config-daemon" containerID="cri-o://c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0" gracePeriod=600 Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.521638 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0" exitCode=0 Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.521970 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"} Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.522117 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"} Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.522139 4675 scope.go:117] "RemoveContainer" containerID="ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c" Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.599524 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:12 crc kubenswrapper[4675]: I0124 07:16:12.592508 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" containerID="cri-o://0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" gracePeriod=604796 Jan 24 07:16:14 crc kubenswrapper[4675]: I0124 07:16:14.207540 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" 
containerName="rabbitmq" containerID="cri-o://8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668" gracePeriod=604796 Jan 24 07:16:15 crc kubenswrapper[4675]: I0124 07:16:15.132106 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 24 07:16:15 crc kubenswrapper[4675]: I0124 07:16:15.531757 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 24 07:16:19 crc kubenswrapper[4675]: E0124 07:16:19.266534 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.376548 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.420531 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.420897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421169 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421278 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421488 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421829 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 
07:16:19.422556 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.423019 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.424040 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.437592 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z" (OuterVolumeSpecName: "kube-api-access-2qg6z") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "kube-api-access-2qg6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.438616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.439654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info" (OuterVolumeSpecName: "pod-info") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.446527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.515587 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545624 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545664 4675 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545686 4675 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545701 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545738 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545780 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.551845 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: 
"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.648249 4675 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.655614 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data" (OuterVolumeSpecName: "config-data") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.656524 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730278 4675 generic.go:334] "Generic (PLEG): container finished" podID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" exitCode=0 Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730340 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerDied","Data":"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075"} Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730384 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerDied","Data":"dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321"} Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730402 
4675 scope.go:117] "RemoveContainer" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730563 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.738581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf" (OuterVolumeSpecName: "server-conf") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.750468 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.750493 4675 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.750503 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.754953 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.849694 4675 scope.go:117] "RemoveContainer" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.851566 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.887177 4675 scope.go:117] "RemoveContainer" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" Jan 24 07:16:19 crc kubenswrapper[4675]: E0124 07:16:19.888966 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075\": container with ID starting with 0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075 not found: ID does not exist" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.889002 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075"} err="failed to get container status \"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075\": rpc error: code = NotFound desc = could not find container \"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075\": container with ID starting with 0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075 not found: ID does not exist" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.889023 4675 scope.go:117] "RemoveContainer" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" Jan 24 07:16:19 crc kubenswrapper[4675]: E0124 07:16:19.889403 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a\": container with ID starting with 3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a not found: ID does not exist" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.889426 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a"} err="failed to get container status \"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a\": rpc error: code = NotFound desc = could not find container \"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a\": container with ID starting with 3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a not found: ID does not exist" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.105231 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.116454 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.152667 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: E0124 07:16:20.153054 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="setup-container" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.153070 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="setup-container" Jan 24 07:16:20 crc kubenswrapper[4675]: E0124 07:16:20.153100 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.153110 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.153291 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.154261 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.156082 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.156602 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.156777 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.157787 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.157903 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.158325 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nnfwj" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.160352 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.189338 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: 
I0124 07:16:20.262918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263001 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263047 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263140 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263292 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxb7s\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-kube-api-access-wxb7s\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3fd85775-321f-4647-95b6-773ec82811e0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263525 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3fd85775-321f-4647-95b6-773ec82811e0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263583 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263697 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-config-data\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.365037 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366003 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxb7s\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-kube-api-access-wxb7s\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.365439 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3fd85775-321f-4647-95b6-773ec82811e0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366076 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3fd85775-321f-4647-95b6-773ec82811e0-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366137 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-config-data\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366254 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366272 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.367168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-config-data\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.367206 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.368038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.368216 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.371648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.372563 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3fd85775-321f-4647-95b6-773ec82811e0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.373815 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3fd85775-321f-4647-95b6-773ec82811e0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.375436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.386206 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wxb7s\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-kube-api-access-wxb7s\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.419413 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.473447 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.768825 4675 generic.go:334] "Generic (PLEG): container finished" podID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerID="8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668" exitCode=0 Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.769189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerDied","Data":"8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668"} Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.803256 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.874560 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875001 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875030 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875155 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875198 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chsxm\" (UniqueName: 
\"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875330 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875393 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875459 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875511 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875553 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc 
kubenswrapper[4675]: I0124 07:16:20.877918 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.878199 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.878544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.884768 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info" (OuterVolumeSpecName: "pod-info") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.884873 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm" (OuterVolumeSpecName: "kube-api-access-chsxm") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "kube-api-access-chsxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.887272 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.889135 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.894358 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.935832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data" (OuterVolumeSpecName: "config-data") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.948018 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf" (OuterVolumeSpecName: "server-conf") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.963044 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" path="/var/lib/kubelet/pods/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c/volumes" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977550 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977576 4675 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977584 4675 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 
07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977593 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977601 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977609 4675 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977617 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977625 4675 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977653 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977662 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.995003 4675 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.046736 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.082392 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.082421 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.089038 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.781409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerDied","Data":"e02cfc39376a20ed79af6aa4a70a95d12cb107645ef263fc4bfe2732893da583"} Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.781743 4675 scope.go:117] "RemoveContainer" containerID="8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.781895 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.784754 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerStarted","Data":"1c0f864913c69fa0df26d8f0e2555bf1eaee8f215d71bad716dd5266c7480278"} Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.819334 4675 scope.go:117] "RemoveContainer" containerID="78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.827457 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.839804 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.866618 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:21 crc kubenswrapper[4675]: E0124 07:16:21.867571 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="setup-container" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.867665 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="setup-container" Jan 24 07:16:21 crc kubenswrapper[4675]: E0124 07:16:21.867758 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.867824 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.868138 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq" 
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.885928 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898323 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898409 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898544 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898778 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.900379 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.900628 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bt874" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.901475 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.901670 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006493 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 
07:16:22.006548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c146e5e-4709-4401-a5eb-522609573260-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006586 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdtg\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-kube-api-access-xqdtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c146e5e-4709-4401-a5eb-522609573260-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 
07:16:22.006754 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006961 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.007003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.007025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108172 
4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108424 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c146e5e-4709-4401-a5eb-522609573260-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108446 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108463 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdtg\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-kube-api-access-xqdtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c146e5e-4709-4401-a5eb-522609573260-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108703 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.109527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.109870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.110078 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.110290 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.111126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.128756 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.155486 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.185179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.185219 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c146e5e-4709-4401-a5eb-522609573260-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.186199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c146e5e-4709-4401-a5eb-522609573260-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.186666 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdtg\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-kube-api-access-xqdtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.250915 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.733862 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"] Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.735630 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.748998 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.783960 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"] Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.818948 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.818992 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819078 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " 
pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819118 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819136 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819161 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.843949 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerStarted","Data":"75f405fdae86dd78c23a70324d2fe9b92658e5ef111d4ed788628deca09cdb34"} Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.920516 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921773 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921885 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921963 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.922047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.922214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztwb\" (UniqueName: 
\"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.922307 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.923128 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.924262 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.925844 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.926443 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: 
\"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.927114 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.927601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.965076 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" path="/var/lib/kubelet/pods/50ed4c9b-a365-46aa-95d7-7be5d2cc354a/volumes" Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.969497 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:23 crc kubenswrapper[4675]: I0124 07:16:23.072029 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:23.531635 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"] Jan 24 07:16:24 crc kubenswrapper[4675]: W0124 07:16:23.537504 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ac6311b_068f_4d1a_9950_f6ad4143ec44.slice/crio-6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6 WatchSource:0}: Error finding container 6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6: Status 404 returned error can't find the container with id 6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6 Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:23.853109 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerStarted","Data":"6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6"} Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:23.854044 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerStarted","Data":"297d18d0a45c3f1d41808bad90659c6819f8936b9b6cb24a60dda7a0a3cc9c86"} Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:24.889124 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerStarted","Data":"4c881fc8ce20aba2f577450cba95424c89b08fbe29954e129e4e033f86adfdfd"} Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:24.894462 4675 generic.go:334] "Generic (PLEG): container finished" podID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerID="35d6db473d64d478e0b32272afc04578101f4641988ab25761ac4fafc2485424" exitCode=0 Jan 24 07:16:24 crc 
kubenswrapper[4675]: I0124 07:16:24.896570 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerDied","Data":"35d6db473d64d478e0b32272afc04578101f4641988ab25761ac4fafc2485424"} Jan 24 07:16:25 crc kubenswrapper[4675]: I0124 07:16:25.913026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerStarted","Data":"0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b"} Jan 24 07:16:25 crc kubenswrapper[4675]: I0124 07:16:25.946338 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" podStartSLOduration=3.946319259 podStartE2EDuration="3.946319259s" podCreationTimestamp="2026-01-24 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:25.933642611 +0000 UTC m=+1387.229747874" watchObservedRunningTime="2026-01-24 07:16:25.946319259 +0000 UTC m=+1387.242424492" Jan 24 07:16:26 crc kubenswrapper[4675]: I0124 07:16:26.922569 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:29 crc kubenswrapper[4675]: E0124 07:16:29.579516 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.074027 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 
07:16:33.184427 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"]
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.184911 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns" containerID="cri-o://0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480" gracePeriod=10
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.355091 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-4qjxm"]
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.357252 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375010 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375106 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375159 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqlcv\" (UniqueName: \"kubernetes.io/projected/4a4ca579-5173-42d0-8dd8-d287df832c44-kube-api-access-hqlcv\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375256 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375328 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375347 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375366 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-config\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.378462 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-4qjxm"]
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.392091 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.203:5353: connect: connection refused"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477191 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-config\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477436 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqlcv\" (UniqueName: \"kubernetes.io/projected/4a4ca579-5173-42d0-8dd8-d287df832c44-kube-api-access-hqlcv\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478636 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-config\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478820 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478854 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.479212 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.485386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.504202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqlcv\" (UniqueName: \"kubernetes.io/projected/4a4ca579-5173-42d0-8dd8-d287df832c44-kube-api-access-hqlcv\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.682285 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.017586 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerID="0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480" exitCode=0
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.018006 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerDied","Data":"0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480"}
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.144479 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-4qjxm"]
Jan 24 07:16:34 crc kubenswrapper[4675]: W0124 07:16:34.145371 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a4ca579_5173_42d0_8dd8_d287df832c44.slice/crio-a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4 WatchSource:0}: Error finding container a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4: Status 404 returned error can't find the container with id a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.183101 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.189896 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") "
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190009 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") "
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190090 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") "
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190172 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") "
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190218 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") "
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190246 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") "
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.196883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t" (OuterVolumeSpecName: "kube-api-access-f5l2t") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "kube-api-access-f5l2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.298629 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.330474 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.334139 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config" (OuterVolumeSpecName: "config") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.345856 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.354117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.375015 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401114 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401167 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401182 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401196 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401207 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.029389 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerDied","Data":"2c3c2a43e5e1f891dc078496766b3dbc527e0916a446f16d71f3f1e737ccce2c"}
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.029446 4675 scope.go:117] "RemoveContainer" containerID="0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480"
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.029592 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.034009 4675 generic.go:334] "Generic (PLEG): container finished" podID="4a4ca579-5173-42d0-8dd8-d287df832c44" containerID="86498a3b6443293315f6b8f373687f3d56f3fc2befa733b480af124f6d0671bb" exitCode=0
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.034051 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" event={"ID":"4a4ca579-5173-42d0-8dd8-d287df832c44","Type":"ContainerDied","Data":"86498a3b6443293315f6b8f373687f3d56f3fc2befa733b480af124f6d0671bb"}
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.034074 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" event={"ID":"4a4ca579-5173-42d0-8dd8-d287df832c44","Type":"ContainerStarted","Data":"a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4"}
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.062053 4675 scope.go:117] "RemoveContainer" containerID="e61aa274860730298b3d466a37bbb7b9f9970e99b80ea9db136fc24849710d8e"
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.062447 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"]
Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.073710 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"]
Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.046016 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" event={"ID":"4a4ca579-5173-42d0-8dd8-d287df832c44","Type":"ContainerStarted","Data":"12f3d450ef2852cdfe84daedbfafdbc1c3a0046155981e95040be8a749a24c4f"}
Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.046674 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.068183 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" podStartSLOduration=3.068162574 podStartE2EDuration="3.068162574s" podCreationTimestamp="2026-01-24 07:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:36.065790637 +0000 UTC m=+1397.361895860" watchObservedRunningTime="2026-01-24 07:16:36.068162574 +0000 UTC m=+1397.364267807"
Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.951393 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" path="/var/lib/kubelet/pods/bc9f2853-f671-4647-81df-50314ca5e8a1/volumes"
Jan 24 07:16:39 crc kubenswrapper[4675]: E0124 07:16:39.895004 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:16:43 crc kubenswrapper[4675]: I0124 07:16:43.683966 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:43 crc kubenswrapper[4675]: I0124 07:16:43.746483 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:43 crc kubenswrapper[4675]: I0124 07:16:43.746806 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns" containerID="cri-o://0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b" gracePeriod=10
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.134594 4675 generic.go:334] "Generic (PLEG): container finished" podID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerID="0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b" exitCode=0
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.134688 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerDied","Data":"0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b"}
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.251277 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.393741 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.393922 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394234 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394384 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394451 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394537 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") "
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.421012 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb" (OuterVolumeSpecName: "kube-api-access-9ztwb") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "kube-api-access-9ztwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.452862 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.469422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.469751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config" (OuterVolumeSpecName: "config") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.470642 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497249 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497307 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497317 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497325 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497335 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.501316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.511248 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.599263 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.599692 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.145376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerDied","Data":"6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6"}
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.145438 4675 scope.go:117] "RemoveContainer" containerID="0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b"
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.145605 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.173737 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.176580 4675 scope.go:117] "RemoveContainer" containerID="35d6db473d64d478e0b32272afc04578101f4641988ab25761ac4fafc2485424"
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.191671 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:46 crc kubenswrapper[4675]: I0124 07:16:46.954313 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" path="/var/lib/kubelet/pods/0ac6311b-068f-4d1a-9950-f6ad4143ec44/volumes"
Jan 24 07:16:50 crc kubenswrapper[4675]: E0124 07:16:50.137368 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:16:55 crc kubenswrapper[4675]: I0124 07:16:55.227136 4675 generic.go:334] "Generic (PLEG): container finished" podID="3fd85775-321f-4647-95b6-773ec82811e0" containerID="75f405fdae86dd78c23a70324d2fe9b92658e5ef111d4ed788628deca09cdb34" exitCode=0
Jan 24 07:16:55 crc kubenswrapper[4675]: I0124 07:16:55.227700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerDied","Data":"75f405fdae86dd78c23a70324d2fe9b92658e5ef111d4ed788628deca09cdb34"}
Jan 24 07:16:56 crc kubenswrapper[4675]: I0124 07:16:56.255358 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerStarted","Data":"7c8fcdf3763fe30ff360bf453fb6a0ce2f3b917e5ff553be71c10483b879ccbe"}
Jan 24 07:16:56 crc kubenswrapper[4675]: I0124 07:16:56.257460 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 24 07:16:57 crc kubenswrapper[4675]: I0124 07:16:57.265791 4675 generic.go:334] "Generic (PLEG): container finished" podID="7c146e5e-4709-4401-a5eb-522609573260" containerID="4c881fc8ce20aba2f577450cba95424c89b08fbe29954e129e4e033f86adfdfd" exitCode=0
Jan 24 07:16:57 crc kubenswrapper[4675]: I0124 07:16:57.265881 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerDied","Data":"4c881fc8ce20aba2f577450cba95424c89b08fbe29954e129e4e033f86adfdfd"}
Jan 24 07:16:57 crc kubenswrapper[4675]: I0124 07:16:57.302690 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.302671333 podStartE2EDuration="37.302671333s" podCreationTimestamp="2026-01-24 07:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:56.285998199 +0000 UTC m=+1417.582103502" watchObservedRunningTime="2026-01-24 07:16:57.302671333 +0000 UTC m=+1418.598776556"
Jan 24 07:16:58 crc kubenswrapper[4675]: I0124 07:16:58.277373 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerStarted","Data":"19e4eb8abb72ccc8dce2d9807941591415dc8a36308c8e2e50bbe505ff9609f1"}
Jan 24 07:16:58 crc kubenswrapper[4675]: I0124 07:16:58.278042 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:58 crc kubenswrapper[4675]: I0124 07:16:58.305347 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.305333637 podStartE2EDuration="37.305333637s" podCreationTimestamp="2026-01-24 07:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:58.303970794 +0000 UTC m=+1419.600076007" watchObservedRunningTime="2026-01-24 07:16:58.305333637 +0000 UTC m=+1419.601438850"
Jan 24 07:17:00 crc kubenswrapper[4675]: E0124 07:17:00.358525 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111128 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"]
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111688 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111701 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111738 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111744 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111758 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111764 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111781 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111786 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111941 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111961 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.112499 4675 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.115382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.115957 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.116666 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119072 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.123187 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.138342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"] Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221046 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221121 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221158 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.226788 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.227136 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.237884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.241584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.430646 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:03 crc kubenswrapper[4675]: I0124 07:17:03.039981 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"] Jan 24 07:17:03 crc kubenswrapper[4675]: I0124 07:17:03.335413 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerStarted","Data":"e3202a37ac290012e4933ff508f6e23ba2f74d0bf8dba51ccf4c15a7dc208324"} Jan 24 07:17:10 crc kubenswrapper[4675]: I0124 07:17:10.477888 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 24 07:17:10 crc kubenswrapper[4675]: E0124 07:17:10.635014 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]" Jan 24 07:17:12 crc kubenswrapper[4675]: I0124 07:17:12.256919 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:17:15 crc kubenswrapper[4675]: I0124 07:17:15.446887 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerStarted","Data":"5e2d98b8ecebf9f363f40de1fe27344c08503ba956ccee3d4ce6f6ac032b1338"} Jan 24 07:17:15 crc kubenswrapper[4675]: I0124 07:17:15.469847 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" podStartSLOduration=2.038826025 podStartE2EDuration="13.469826637s" podCreationTimestamp="2026-01-24 07:17:02 +0000 UTC" firstStartedPulling="2026-01-24 07:17:03.041690905 +0000 UTC m=+1424.337796128" lastFinishedPulling="2026-01-24 07:17:14.472691517 +0000 UTC m=+1435.768796740" observedRunningTime="2026-01-24 07:17:15.465936483 +0000 UTC m=+1436.762041716" watchObservedRunningTime="2026-01-24 07:17:15.469826637 +0000 UTC m=+1436.765931860" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.076284 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"] Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.078501 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.115517 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"] Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.204664 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.204730 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.204827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306257 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306293 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306787 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.307019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.331432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.398939 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.548875 4675 generic.go:334] "Generic (PLEG): container finished" podID="774fb762-6506-4e0c-9732-9208f7802057" containerID="5e2d98b8ecebf9f363f40de1fe27344c08503ba956ccee3d4ce6f6ac032b1338" exitCode=0 Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.548918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerDied","Data":"5e2d98b8ecebf9f363f40de1fe27344c08503ba956ccee3d4ce6f6ac032b1338"} Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.929561 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"] Jan 24 07:17:27 crc kubenswrapper[4675]: I0124 07:17:27.560309 4675 generic.go:334] "Generic (PLEG): container finished" podID="215a25e3-9603-4b01-b578-c5f6883fd589" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04" exitCode=0 Jan 24 07:17:27 crc kubenswrapper[4675]: I0124 07:17:27.560386 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"} Jan 24 07:17:27 crc kubenswrapper[4675]: I0124 07:17:27.560652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerStarted","Data":"772e3cfb34ff48595caf0e6b29650847a506a8da6d2a6bdb2d081faf4a3cd015"} Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.019092 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.047963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.048311 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.048584 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.048656 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.055884 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455" (OuterVolumeSpecName: "kube-api-access-sq455") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "kube-api-access-sq455". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.058488 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.077838 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.081115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory" (OuterVolumeSpecName: "inventory") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151594 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151656 4675 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151675 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151690 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.578300 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerStarted","Data":"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"} Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.594838 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.596966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerDied","Data":"e3202a37ac290012e4933ff508f6e23ba2f74d0bf8dba51ccf4c15a7dc208324"} Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.597002 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3202a37ac290012e4933ff508f6e23ba2f74d0bf8dba51ccf4c15a7dc208324" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.744772 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"] Jan 24 07:17:28 crc kubenswrapper[4675]: E0124 07:17:28.745536 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774fb762-6506-4e0c-9732-9208f7802057" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.745556 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="774fb762-6506-4e0c-9732-9208f7802057" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.745757 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="774fb762-6506-4e0c-9732-9208f7802057" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.746464 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.751567 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.752383 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.752784 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.752942 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.754909 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"] Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.778263 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.778554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.778801 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.880731 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.881132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.881243 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.887016 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.887501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.899366 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:29 crc kubenswrapper[4675]: I0124 07:17:29.074196 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:29 crc kubenswrapper[4675]: I0124 07:17:29.636811 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"] Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.615505 4675 generic.go:334] "Generic (PLEG): container finished" podID="215a25e3-9603-4b01-b578-c5f6883fd589" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698" exitCode=0 Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.615593 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"} Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.618776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerStarted","Data":"453b3ef2649da6a1c50b0dc98d40d38633741b39bc391c1b8a4378bfcbb7db66"} Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.641018 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.642825 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.685053 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.716335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.716689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.716921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.818537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.818614 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.818667 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.819144 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.819292 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.840568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.967989 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:31 crc kubenswrapper[4675]: W0124 07:17:31.613742 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf64f926c_c118_46ad_80c6_a51e8e235362.slice/crio-4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6 WatchSource:0}: Error finding container 4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6: Status 404 returned error can't find the container with id 4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6 Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.614220 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.629969 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerStarted","Data":"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"} Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.631767 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerStarted","Data":"9d44c74646a51dd3d2e85a8e39a474b7cca6ee83fc282eea72fa4e3a554243fe"} Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.633519 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerStarted","Data":"4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6"} Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.666958 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffsqt" podStartSLOduration=2.172077779 
podStartE2EDuration="5.666935989s" podCreationTimestamp="2026-01-24 07:17:26 +0000 UTC" firstStartedPulling="2026-01-24 07:17:27.563956672 +0000 UTC m=+1448.860061905" lastFinishedPulling="2026-01-24 07:17:31.058814892 +0000 UTC m=+1452.354920115" observedRunningTime="2026-01-24 07:17:31.651784022 +0000 UTC m=+1452.947889255" watchObservedRunningTime="2026-01-24 07:17:31.666935989 +0000 UTC m=+1452.963041212" Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.689492 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" podStartSLOduration=2.482506016 podStartE2EDuration="3.689474895s" podCreationTimestamp="2026-01-24 07:17:28 +0000 UTC" firstStartedPulling="2026-01-24 07:17:29.642811124 +0000 UTC m=+1450.938916347" lastFinishedPulling="2026-01-24 07:17:30.849780003 +0000 UTC m=+1452.145885226" observedRunningTime="2026-01-24 07:17:31.675360083 +0000 UTC m=+1452.971465316" watchObservedRunningTime="2026-01-24 07:17:31.689474895 +0000 UTC m=+1452.985580118" Jan 24 07:17:32 crc kubenswrapper[4675]: I0124 07:17:32.647605 4675 generic.go:334] "Generic (PLEG): container finished" podID="f64f926c-c118-46ad-80c6-a51e8e235362" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656" exitCode=0 Jan 24 07:17:32 crc kubenswrapper[4675]: I0124 07:17:32.647729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"} Jan 24 07:17:34 crc kubenswrapper[4675]: I0124 07:17:34.674857 4675 generic.go:334] "Generic (PLEG): container finished" podID="55150857-7da2-4609-84be-9cbaa28141ed" containerID="9d44c74646a51dd3d2e85a8e39a474b7cca6ee83fc282eea72fa4e3a554243fe" exitCode=0 Jan 24 07:17:34 crc kubenswrapper[4675]: I0124 07:17:34.675248 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerDied","Data":"9d44c74646a51dd3d2e85a8e39a474b7cca6ee83fc282eea72fa4e3a554243fe"} Jan 24 07:17:34 crc kubenswrapper[4675]: I0124 07:17:34.681660 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerStarted","Data":"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"} Jan 24 07:17:35 crc kubenswrapper[4675]: I0124 07:17:35.693346 4675 generic.go:334] "Generic (PLEG): container finished" podID="f64f926c-c118-46ad-80c6-a51e8e235362" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479" exitCode=0 Jan 24 07:17:35 crc kubenswrapper[4675]: I0124 07:17:35.693426 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"} Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.084098 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.147641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"55150857-7da2-4609-84be-9cbaa28141ed\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.147766 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"55150857-7da2-4609-84be-9cbaa28141ed\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.147971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"55150857-7da2-4609-84be-9cbaa28141ed\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.176661 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq" (OuterVolumeSpecName: "kube-api-access-ps5qq") pod "55150857-7da2-4609-84be-9cbaa28141ed" (UID: "55150857-7da2-4609-84be-9cbaa28141ed"). InnerVolumeSpecName "kube-api-access-ps5qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.178277 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55150857-7da2-4609-84be-9cbaa28141ed" (UID: "55150857-7da2-4609-84be-9cbaa28141ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.189370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory" (OuterVolumeSpecName: "inventory") pod "55150857-7da2-4609-84be-9cbaa28141ed" (UID: "55150857-7da2-4609-84be-9cbaa28141ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.250327 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.250488 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.250543 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.399123 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:36 crc 
kubenswrapper[4675]: I0124 07:17:36.399175 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.711063 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerDied","Data":"453b3ef2649da6a1c50b0dc98d40d38633741b39bc391c1b8a4378bfcbb7db66"} Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.711376 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453b3ef2649da6a1c50b0dc98d40d38633741b39bc391c1b8a4378bfcbb7db66" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.711429 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.730159 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerStarted","Data":"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"} Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.756167 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27f4l" podStartSLOduration=3.259693044 podStartE2EDuration="6.756149703s" podCreationTimestamp="2026-01-24 07:17:30 +0000 UTC" firstStartedPulling="2026-01-24 07:17:32.649531817 +0000 UTC m=+1453.945637040" lastFinishedPulling="2026-01-24 07:17:36.145988476 +0000 UTC m=+1457.442093699" observedRunningTime="2026-01-24 07:17:36.754911153 +0000 UTC m=+1458.051016376" watchObservedRunningTime="2026-01-24 07:17:36.756149703 +0000 UTC m=+1458.052254926" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.794278 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw"] Jan 24 07:17:36 crc kubenswrapper[4675]: E0124 07:17:36.794813 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55150857-7da2-4609-84be-9cbaa28141ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.794838 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55150857-7da2-4609-84be-9cbaa28141ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.795053 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="55150857-7da2-4609-84be-9cbaa28141ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.795846 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.804349 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.804817 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.805145 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.805185 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.814971 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw"] Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.860847 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.860932 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.861013 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.861050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.962843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.962950 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.962981 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.963127 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.969552 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.970564 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.971124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.981069 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.121271 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.453916 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw"] Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.467690 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ffsqt" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" probeResult="failure" output=< Jan 24 07:17:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:17:37 crc kubenswrapper[4675]: > Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.757855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerStarted","Data":"57432d5c7d3510a9357d6e2e9e14bd4d88d3c826de5299149d5d3818512d7037"} Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.768162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerStarted","Data":"ce27a8857b567fc180eec8926ea79a194563de9a40fccc96fae87fba64bf0d79"} Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.793949 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" podStartSLOduration=2.318964092 podStartE2EDuration="2.79392643s" podCreationTimestamp="2026-01-24 07:17:36 +0000 UTC" firstStartedPulling="2026-01-24 07:17:37.460182916 +0000 UTC m=+1458.756288139" lastFinishedPulling="2026-01-24 07:17:37.935145254 +0000 UTC m=+1459.231250477" observedRunningTime="2026-01-24 07:17:38.783487086 +0000 UTC m=+1460.079592299" watchObservedRunningTime="2026-01-24 07:17:38.79392643 +0000 
UTC m=+1460.090031653" Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.918513 4675 scope.go:117] "RemoveContainer" containerID="cf93369f45b95439f48ef44ae1c4d7acc85ac8a88c7301daa8df8a93d1811848" Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.950043 4675 scope.go:117] "RemoveContainer" containerID="d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b" Jan 24 07:17:39 crc kubenswrapper[4675]: I0124 07:17:39.006896 4675 scope.go:117] "RemoveContainer" containerID="874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a" Jan 24 07:17:40 crc kubenswrapper[4675]: I0124 07:17:40.969269 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:40 crc kubenswrapper[4675]: I0124 07:17:40.969758 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:41 crc kubenswrapper[4675]: I0124 07:17:41.024878 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:41 crc kubenswrapper[4675]: I0124 07:17:41.851487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:41 crc kubenswrapper[4675]: I0124 07:17:41.897963 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:43 crc kubenswrapper[4675]: I0124 07:17:43.809539 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27f4l" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server" containerID="cri-o://43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" gracePeriod=2 Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.283873 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.416106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"f64f926c-c118-46ad-80c6-a51e8e235362\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.416163 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"f64f926c-c118-46ad-80c6-a51e8e235362\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.416371 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"f64f926c-c118-46ad-80c6-a51e8e235362\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.421350 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities" (OuterVolumeSpecName: "utilities") pod "f64f926c-c118-46ad-80c6-a51e8e235362" (UID: "f64f926c-c118-46ad-80c6-a51e8e235362"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.421818 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs" (OuterVolumeSpecName: "kube-api-access-wnjgs") pod "f64f926c-c118-46ad-80c6-a51e8e235362" (UID: "f64f926c-c118-46ad-80c6-a51e8e235362"). InnerVolumeSpecName "kube-api-access-wnjgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.447098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f64f926c-c118-46ad-80c6-a51e8e235362" (UID: "f64f926c-c118-46ad-80c6-a51e8e235362"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.519475 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.519524 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.519543 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.827286 4675 generic.go:334] "Generic (PLEG): container finished" podID="f64f926c-c118-46ad-80c6-a51e8e235362" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" exitCode=0 Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.827369 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"} Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.827386 4675 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.828763 4675 scope.go:117] "RemoveContainer" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.829800 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6"} Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.864586 4675 scope.go:117] "RemoveContainer" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.892146 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.892396 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.915109 4675 scope.go:117] "RemoveContainer" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.944920 4675 scope.go:117] "RemoveContainer" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" Jan 24 07:17:44 crc kubenswrapper[4675]: E0124 07:17:44.945834 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652\": container with ID starting with 43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652 not found: ID does not exist" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.945875 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"} err="failed to get container status \"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652\": rpc error: code = NotFound desc = could not find container \"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652\": container with ID starting with 43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652 not found: ID does not exist" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.945899 4675 scope.go:117] "RemoveContainer" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479" Jan 24 07:17:44 crc kubenswrapper[4675]: E0124 07:17:44.946413 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479\": container with ID starting with dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479 not found: ID does not exist" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.946471 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"} err="failed to get container status \"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479\": rpc error: code = NotFound desc = could not find container \"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479\": container with ID starting with dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479 not found: ID does not exist" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.946500 4675 scope.go:117] "RemoveContainer" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656" Jan 24 07:17:44 crc kubenswrapper[4675]: E0124 
07:17:44.946899 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656\": container with ID starting with 6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656 not found: ID does not exist" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.947140 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"} err="failed to get container status \"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656\": rpc error: code = NotFound desc = could not find container \"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656\": container with ID starting with 6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656 not found: ID does not exist" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.957538 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" path="/var/lib/kubelet/pods/f64f926c-c118-46ad-80c6-a51e8e235362/volumes" Jan 24 07:17:46 crc kubenswrapper[4675]: I0124 07:17:46.447655 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:46 crc kubenswrapper[4675]: I0124 07:17:46.498742 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:47 crc kubenswrapper[4675]: I0124 07:17:47.659087 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"] Jan 24 07:17:47 crc kubenswrapper[4675]: I0124 07:17:47.852439 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-ffsqt" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" containerID="cri-o://1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" gracePeriod=2 Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.309618 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.393124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"215a25e3-9603-4b01-b578-c5f6883fd589\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.393233 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"215a25e3-9603-4b01-b578-c5f6883fd589\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.393331 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"215a25e3-9603-4b01-b578-c5f6883fd589\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.394928 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities" (OuterVolumeSpecName: "utilities") pod "215a25e3-9603-4b01-b578-c5f6883fd589" (UID: "215a25e3-9603-4b01-b578-c5f6883fd589"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.401365 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q" (OuterVolumeSpecName: "kube-api-access-zvx8q") pod "215a25e3-9603-4b01-b578-c5f6883fd589" (UID: "215a25e3-9603-4b01-b578-c5f6883fd589"). InnerVolumeSpecName "kube-api-access-zvx8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.436118 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "215a25e3-9603-4b01-b578-c5f6883fd589" (UID: "215a25e3-9603-4b01-b578-c5f6883fd589"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.495276 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.495304 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.495314 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863091 4675 generic.go:334] "Generic (PLEG): container finished" podID="215a25e3-9603-4b01-b578-c5f6883fd589" 
containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" exitCode=0 Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863134 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"} Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"772e3cfb34ff48595caf0e6b29650847a506a8da6d2a6bdb2d081faf4a3cd015"} Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863179 4675 scope.go:117] "RemoveContainer" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863301 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.889741 4675 scope.go:117] "RemoveContainer" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.905133 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"] Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.910998 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"] Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.920197 4675 scope.go:117] "RemoveContainer" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.961536 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" path="/var/lib/kubelet/pods/215a25e3-9603-4b01-b578-c5f6883fd589/volumes" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.964785 4675 scope.go:117] "RemoveContainer" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" Jan 24 07:17:48 crc kubenswrapper[4675]: E0124 07:17:48.965755 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3\": container with ID starting with 1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3 not found: ID does not exist" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.965802 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"} err="failed to get container status 
\"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3\": rpc error: code = NotFound desc = could not find container \"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3\": container with ID starting with 1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3 not found: ID does not exist" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.965830 4675 scope.go:117] "RemoveContainer" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698" Jan 24 07:17:48 crc kubenswrapper[4675]: E0124 07:17:48.966230 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698\": container with ID starting with ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698 not found: ID does not exist" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.966332 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"} err="failed to get container status \"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698\": rpc error: code = NotFound desc = could not find container \"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698\": container with ID starting with ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698 not found: ID does not exist" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.966377 4675 scope.go:117] "RemoveContainer" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04" Jan 24 07:17:48 crc kubenswrapper[4675]: E0124 07:17:48.966641 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04\": container with ID starting with 5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04 not found: ID does not exist" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04" Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.966667 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"} err="failed to get container status \"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04\": rpc error: code = NotFound desc = could not find container \"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04\": container with ID starting with 5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04 not found: ID does not exist" Jan 24 07:18:08 crc kubenswrapper[4675]: I0124 07:18:08.630214 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:18:08 crc kubenswrapper[4675]: I0124 07:18:08.631159 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.906417 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s7cw6"] Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907422 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" Jan 24 
07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907444 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907469 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-content" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907481 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-content" Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907504 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-utilities" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907517 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-utilities" Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907534 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907546 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server" Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907623 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-content" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907636 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-content" Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907657 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-utilities" Jan 24 07:18:10 
crc kubenswrapper[4675]: I0124 07:18:10.907668 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-utilities" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907969 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.908014 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.909953 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.937174 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"] Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.967755 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.967888 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.967997 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069444 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.070072 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.088579 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.231108 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.687751 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"] Jan 24 07:18:12 crc kubenswrapper[4675]: I0124 07:18:12.103669 4675 generic.go:334] "Generic (PLEG): container finished" podID="0d407256-826f-449b-bc5d-c7ba87f55424" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866" exitCode=0 Jan 24 07:18:12 crc kubenswrapper[4675]: I0124 07:18:12.103765 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"} Jan 24 07:18:12 crc kubenswrapper[4675]: I0124 07:18:12.104823 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerStarted","Data":"7a32005f4223a80017a570cbb6c1d2140e4463bb0b2d53f0d3bc689f002806a6"} Jan 24 07:18:14 crc kubenswrapper[4675]: I0124 07:18:14.123274 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerStarted","Data":"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"} Jan 24 07:18:15 crc kubenswrapper[4675]: I0124 07:18:15.133469 4675 generic.go:334] "Generic (PLEG): container finished" podID="0d407256-826f-449b-bc5d-c7ba87f55424" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418" exitCode=0 Jan 24 07:18:15 crc kubenswrapper[4675]: I0124 07:18:15.133558 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"} Jan 24 07:18:17 crc kubenswrapper[4675]: I0124 07:18:17.158510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerStarted","Data":"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"} Jan 24 07:18:17 crc kubenswrapper[4675]: I0124 07:18:17.188064 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s7cw6" podStartSLOduration=2.950308405 podStartE2EDuration="7.188039981s" podCreationTimestamp="2026-01-24 07:18:10 +0000 UTC" firstStartedPulling="2026-01-24 07:18:12.105500699 +0000 UTC m=+1493.401605922" lastFinishedPulling="2026-01-24 07:18:16.343232275 +0000 UTC m=+1497.639337498" observedRunningTime="2026-01-24 07:18:17.182263541 +0000 UTC m=+1498.478368794" watchObservedRunningTime="2026-01-24 07:18:17.188039981 +0000 UTC m=+1498.484145244" Jan 24 07:18:21 crc kubenswrapper[4675]: I0124 07:18:21.232214 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:21 crc kubenswrapper[4675]: I0124 
07:18:21.232914 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:21 crc kubenswrapper[4675]: I0124 07:18:21.288412 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:22 crc kubenswrapper[4675]: I0124 07:18:22.289379 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:22 crc kubenswrapper[4675]: I0124 07:18:22.353641 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"] Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.221903 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s7cw6" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" containerID="cri-o://8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" gracePeriod=2 Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.674662 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.855266 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"0d407256-826f-449b-bc5d-c7ba87f55424\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.855322 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"0d407256-826f-449b-bc5d-c7ba87f55424\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.855585 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"0d407256-826f-449b-bc5d-c7ba87f55424\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.856102 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities" (OuterVolumeSpecName: "utilities") pod "0d407256-826f-449b-bc5d-c7ba87f55424" (UID: "0d407256-826f-449b-bc5d-c7ba87f55424"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.865528 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s" (OuterVolumeSpecName: "kube-api-access-x6t4s") pod "0d407256-826f-449b-bc5d-c7ba87f55424" (UID: "0d407256-826f-449b-bc5d-c7ba87f55424"). InnerVolumeSpecName "kube-api-access-x6t4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.914986 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d407256-826f-449b-bc5d-c7ba87f55424" (UID: "0d407256-826f-449b-bc5d-c7ba87f55424"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.958609 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.958637 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.958646 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") on node \"crc\" DevicePath \"\"" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.245853 4675 generic.go:334] "Generic (PLEG): container finished" podID="0d407256-826f-449b-bc5d-c7ba87f55424" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" exitCode=0 Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.245943 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.245935 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"} Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.246314 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"7a32005f4223a80017a570cbb6c1d2140e4463bb0b2d53f0d3bc689f002806a6"} Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.246336 4675 scope.go:117] "RemoveContainer" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.294176 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"] Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.295243 4675 scope.go:117] "RemoveContainer" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.302409 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"] Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.331039 4675 scope.go:117] "RemoveContainer" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.362622 4675 scope.go:117] "RemoveContainer" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" Jan 24 07:18:25 crc kubenswrapper[4675]: E0124 07:18:25.363163 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8\": container with ID starting with 8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8 not found: ID does not exist" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363191 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"} err="failed to get container status \"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8\": rpc error: code = NotFound desc = could not find container \"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8\": container with ID starting with 8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8 not found: ID does not exist" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363210 4675 scope.go:117] "RemoveContainer" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418" Jan 24 07:18:25 crc kubenswrapper[4675]: E0124 07:18:25.363484 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418\": container with ID starting with 79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418 not found: ID does not exist" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363503 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"} err="failed to get container status \"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418\": rpc error: code = NotFound desc = could not find container \"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418\": container with ID 
starting with 79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418 not found: ID does not exist" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363517 4675 scope.go:117] "RemoveContainer" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866" Jan 24 07:18:25 crc kubenswrapper[4675]: E0124 07:18:25.363926 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866\": container with ID starting with 158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866 not found: ID does not exist" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866" Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363952 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"} err="failed to get container status \"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866\": rpc error: code = NotFound desc = could not find container \"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866\": container with ID starting with 158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866 not found: ID does not exist" Jan 24 07:18:26 crc kubenswrapper[4675]: I0124 07:18:26.959188 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" path="/var/lib/kubelet/pods/0d407256-826f-449b-bc5d-c7ba87f55424/volumes" Jan 24 07:18:38 crc kubenswrapper[4675]: I0124 07:18:38.629972 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:18:38 crc kubenswrapper[4675]: I0124 
07:18:38.630413 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:18:39 crc kubenswrapper[4675]: I0124 07:18:39.144339 4675 scope.go:117] "RemoveContainer" containerID="6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6" Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.630158 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.630920 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.631001 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.632098 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.632206 4675 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" gracePeriod=600 Jan 24 07:19:08 crc kubenswrapper[4675]: E0124 07:19:08.755293 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.760484 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" exitCode=0 Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.760585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"} Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.760850 4675 scope.go:117] "RemoveContainer" containerID="c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0" Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.761333 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:09 crc kubenswrapper[4675]: E0124 07:19:09.761627 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:21 crc kubenswrapper[4675]: I0124 07:19:21.942849 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:21 crc kubenswrapper[4675]: E0124 07:19:21.944638 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:36 crc kubenswrapper[4675]: I0124 07:19:36.943499 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:36 crc kubenswrapper[4675]: E0124 07:19:36.944393 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:47 crc kubenswrapper[4675]: I0124 07:19:47.943328 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:47 crc kubenswrapper[4675]: E0124 07:19:47.944363 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:01 crc kubenswrapper[4675]: I0124 07:20:01.943122 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:01 crc kubenswrapper[4675]: E0124 07:20:01.944016 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:13 crc kubenswrapper[4675]: I0124 07:20:13.942965 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:13 crc kubenswrapper[4675]: E0124 07:20:13.943790 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:28 crc kubenswrapper[4675]: I0124 07:20:28.954273 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:28 crc kubenswrapper[4675]: E0124 07:20:28.955299 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:39 crc kubenswrapper[4675]: I0124 07:20:39.258795 4675 scope.go:117] "RemoveContainer" containerID="7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566" Jan 24 07:20:39 crc kubenswrapper[4675]: I0124 07:20:39.286985 4675 scope.go:117] "RemoveContainer" containerID="c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7" Jan 24 07:20:42 crc kubenswrapper[4675]: I0124 07:20:42.943326 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:42 crc kubenswrapper[4675]: E0124 07:20:42.944281 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:53 crc kubenswrapper[4675]: I0124 07:20:53.943222 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:53 crc kubenswrapper[4675]: E0124 07:20:53.943914 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:05 crc kubenswrapper[4675]: I0124 07:21:05.943523 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:05 crc kubenswrapper[4675]: E0124 07:21:05.944365 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.050179 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.064091 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.073551 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.082088 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.985570 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147543ec-f687-430c-8a42-547c5861dbf4" path="/var/lib/kubelet/pods/147543ec-f687-430c-8a42-547c5861dbf4/volumes" Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.986963 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" path="/var/lib/kubelet/pods/ade78eac-6799-49f4-b0ea-2f3dcb21273e/volumes" Jan 24 07:21:12 crc kubenswrapper[4675]: I0124 07:21:12.960309 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerID="ce27a8857b567fc180eec8926ea79a194563de9a40fccc96fae87fba64bf0d79" exitCode=0 Jan 24 07:21:12 crc kubenswrapper[4675]: I0124 07:21:12.960414 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerDied","Data":"ce27a8857b567fc180eec8926ea79a194563de9a40fccc96fae87fba64bf0d79"} Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.047124 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.057421 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.367476 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.491977 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.492087 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.492131 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.492292 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.498224 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.499310 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl" (OuterVolumeSpecName: "kube-api-access-nb7dl") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "kube-api-access-nb7dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.524248 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory" (OuterVolumeSpecName: "inventory") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.527134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594832 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594891 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594906 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594917 4675 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.952002 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" path="/var/lib/kubelet/pods/f66b11fd-5bd9-4ba0-bd60-b370a709be63/volumes" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.982770 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerDied","Data":"57432d5c7d3510a9357d6e2e9e14bd4d88d3c826de5299149d5d3818512d7037"} Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.982864 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57432d5c7d3510a9357d6e2e9e14bd4d88d3c826de5299149d5d3818512d7037" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.982785 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.064397 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.078903 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.088815 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.096975 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.104598 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.112732 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.120942 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"] Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121465 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-utilities" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121487 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-utilities" Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121503 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121511 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121533 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121539 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121556 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-content" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121562 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-content" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121775 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121798 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" Jan 24 07:21:15 crc 
kubenswrapper[4675]: I0124 07:21:15.122512 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.124878 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.124950 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.128135 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.128266 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.128685 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.315967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.316038 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.317316 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.419614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.419698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.419795 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.425390 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.429354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.443506 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.741282 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.319078 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"] Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.322894 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.963231 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" path="/var/lib/kubelet/pods/33b2533f-cb15-4581-84c1-81235b34bfe5/volumes" Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.964746 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" path="/var/lib/kubelet/pods/45f87016-197d-4a38-94d7-4c7828af8ee3/volumes" Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.965355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" path="/var/lib/kubelet/pods/5b33dcb3-da61-44f3-9666-2b4afb90b9cd/volumes" Jan 24 07:21:17 crc kubenswrapper[4675]: I0124 07:21:17.004461 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerStarted","Data":"78b7fe3eb41435eafa123bc3a0e51de4500cea6ee1cf2c6b836b190ca84df194"} Jan 24 07:21:18 crc kubenswrapper[4675]: I0124 07:21:18.016080 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerStarted","Data":"4e774ccd4a0c45f76240627c3ed1c01aa5657474b302cffa11c10c2b8e04e982"} Jan 24 07:21:18 crc kubenswrapper[4675]: I0124 07:21:18.033305 4675 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" podStartSLOduration=2.531955308 podStartE2EDuration="3.033288279s" podCreationTimestamp="2026-01-24 07:21:15 +0000 UTC" firstStartedPulling="2026-01-24 07:21:16.322692906 +0000 UTC m=+1677.618798119" lastFinishedPulling="2026-01-24 07:21:16.824025867 +0000 UTC m=+1678.120131090" observedRunningTime="2026-01-24 07:21:18.029920728 +0000 UTC m=+1679.326025961" watchObservedRunningTime="2026-01-24 07:21:18.033288279 +0000 UTC m=+1679.329393512" Jan 24 07:21:19 crc kubenswrapper[4675]: I0124 07:21:19.942678 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:19 crc kubenswrapper[4675]: E0124 07:21:19.943155 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:33 crc kubenswrapper[4675]: I0124 07:21:33.942583 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:33 crc kubenswrapper[4675]: E0124 07:21:33.943290 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:35 crc kubenswrapper[4675]: I0124 07:21:35.036846 4675 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:21:35 crc kubenswrapper[4675]: I0124 07:21:35.045715 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:21:36 crc kubenswrapper[4675]: I0124 07:21:36.959787 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" path="/var/lib/kubelet/pods/38c46b58-28e2-4896-8ae5-dc53cbe96ec9/volumes" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.363877 4675 scope.go:117] "RemoveContainer" containerID="9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.418804 4675 scope.go:117] "RemoveContainer" containerID="2410d88c73d46b104c7a96605edcd69c1a2ae6d7410fac2b2340c43785d9bc0e" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.447257 4675 scope.go:117] "RemoveContainer" containerID="1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.497627 4675 scope.go:117] "RemoveContainer" containerID="cb5f5de19b4ad05d5cb260b67a7ffda59880a5be1b09d0c5d743d36c1be22ba3" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.529525 4675 scope.go:117] "RemoveContainer" containerID="f1cbd2804e3c921d0862ddd3c3e25da9a0eb08d8f218d2fcc9340af63efc5b69" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.568257 4675 scope.go:117] "RemoveContainer" containerID="76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.611501 4675 scope.go:117] "RemoveContainer" containerID="974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2" Jan 24 07:21:44 crc kubenswrapper[4675]: I0124 07:21:44.942843 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:44 crc kubenswrapper[4675]: E0124 07:21:44.943679 4675 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.081957 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.091491 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.104255 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.115289 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.125736 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.138579 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.148735 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.157123 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.164444 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 
07:21:46.173009 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.182356 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.190850 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.952264 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" path="/var/lib/kubelet/pods/5e0a3027-2e26-4258-aaee-a5f0df76fe34/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.953366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" path="/var/lib/kubelet/pods/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.954349 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" path="/var/lib/kubelet/pods/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.955132 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" path="/var/lib/kubelet/pods/ba347982-6836-4e3f-80c3-ef28ffc5e5cc/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.956513 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" path="/var/lib/kubelet/pods/cce44ec9-1ffb-44d7-bcce-250a1fdf6959/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.957290 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f802e166-b89b-4e38-9230-762edc86b32c" path="/var/lib/kubelet/pods/f802e166-b89b-4e38-9230-762edc86b32c/volumes" Jan 24 07:21:53 crc kubenswrapper[4675]: I0124 
07:21:53.048927 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:21:53 crc kubenswrapper[4675]: I0124 07:21:53.060503 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:21:54 crc kubenswrapper[4675]: I0124 07:21:54.952292 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" path="/var/lib/kubelet/pods/c949a736-b46d-4907-a24d-17f28f4e3f71/volumes" Jan 24 07:21:59 crc kubenswrapper[4675]: I0124 07:21:59.943169 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:59 crc kubenswrapper[4675]: E0124 07:21:59.943981 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:11 crc kubenswrapper[4675]: I0124 07:22:11.034335 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:22:11 crc kubenswrapper[4675]: I0124 07:22:11.043202 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:22:12 crc kubenswrapper[4675]: I0124 07:22:12.960050 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" path="/var/lib/kubelet/pods/7e53c5a1-6293-46d9-9783-e7d183050152/volumes" Jan 24 07:22:13 crc kubenswrapper[4675]: I0124 07:22:13.944139 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:13 crc kubenswrapper[4675]: E0124 07:22:13.945494 4675 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:27 crc kubenswrapper[4675]: I0124 07:22:27.943236 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:27 crc kubenswrapper[4675]: E0124 07:22:27.945152 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:33 crc kubenswrapper[4675]: I0124 07:22:33.037504 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:22:33 crc kubenswrapper[4675]: I0124 07:22:33.046644 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:22:34 crc kubenswrapper[4675]: I0124 07:22:34.957150 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" path="/var/lib/kubelet/pods/871f5758-f078-4271-acb9-e5ca8bfdc2eb/volumes" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.749546 4675 scope.go:117] "RemoveContainer" containerID="1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.808355 4675 scope.go:117] "RemoveContainer" 
containerID="face7e5c0b8054d6c99e86c42a7c3b558ca54c06b16b7b249ea8d2239d88036b" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.840652 4675 scope.go:117] "RemoveContainer" containerID="bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.882882 4675 scope.go:117] "RemoveContainer" containerID="fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.941826 4675 scope.go:117] "RemoveContainer" containerID="c2b0d0fa45b902eb0ffa086ad50d248f34796e32c1a20209565126bead4f77e0" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.980051 4675 scope.go:117] "RemoveContainer" containerID="2617af6172b0f231078c0676a80fde395fe2ef1163c9fa0791bb89294c2f806c" Jan 24 07:22:40 crc kubenswrapper[4675]: I0124 07:22:40.040246 4675 scope.go:117] "RemoveContainer" containerID="03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e" Jan 24 07:22:40 crc kubenswrapper[4675]: I0124 07:22:40.063669 4675 scope.go:117] "RemoveContainer" containerID="f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67" Jan 24 07:22:40 crc kubenswrapper[4675]: I0124 07:22:40.103903 4675 scope.go:117] "RemoveContainer" containerID="6d211dc6ddf9ea6d7e3e8e95b729de63c53d51d2eead6595b62cad41e16dadc4" Jan 24 07:22:42 crc kubenswrapper[4675]: I0124 07:22:42.942183 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:42 crc kubenswrapper[4675]: E0124 07:22:42.942989 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.060714 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.067568 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.075208 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.083060 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.963348 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" path="/var/lib/kubelet/pods/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01/volumes" Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.965474 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54df341-915c-4505-bd2e-81923b07a2be" path="/var/lib/kubelet/pods/f54df341-915c-4505-bd2e-81923b07a2be/volumes" Jan 24 07:22:56 crc kubenswrapper[4675]: I0124 07:22:56.943674 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:56 crc kubenswrapper[4675]: E0124 07:22:56.944332 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.458963 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.460686 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.479136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.552135 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.552232 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.552337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.654371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " 
pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.654754 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.654921 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.655279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.655456 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.672567 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc 
kubenswrapper[4675]: I0124 07:22:57.784849 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:58 crc kubenswrapper[4675]: I0124 07:22:58.238057 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:22:59 crc kubenswrapper[4675]: I0124 07:22:59.274130 4675 generic.go:334] "Generic (PLEG): container finished" podID="744470af-3cf4-4f93-8269-4e579adc0101" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765" exitCode=0 Jan 24 07:22:59 crc kubenswrapper[4675]: I0124 07:22:59.274256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765"} Jan 24 07:22:59 crc kubenswrapper[4675]: I0124 07:22:59.274395 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerStarted","Data":"bd0b05f223be494afcf36db2c1abefbea4c81384df296c831af9fc3ee3f29310"} Jan 24 07:23:00 crc kubenswrapper[4675]: I0124 07:23:00.284478 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerStarted","Data":"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"} Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.025049 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.034564 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.044994 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.054709 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:23:04 crc kubenswrapper[4675]: I0124 07:23:04.955226 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" path="/var/lib/kubelet/pods/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7/volumes" Jan 24 07:23:04 crc kubenswrapper[4675]: I0124 07:23:04.958671 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57270c73-9e5a-4629-8c7a-85123438a067" path="/var/lib/kubelet/pods/57270c73-9e5a-4629-8c7a-85123438a067/volumes" Jan 24 07:23:05 crc kubenswrapper[4675]: I0124 07:23:05.348618 4675 generic.go:334] "Generic (PLEG): container finished" podID="744470af-3cf4-4f93-8269-4e579adc0101" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" exitCode=0 Jan 24 07:23:05 crc kubenswrapper[4675]: I0124 07:23:05.348686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"} Jan 24 07:23:06 crc kubenswrapper[4675]: I0124 07:23:06.360762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerStarted","Data":"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee"} Jan 24 07:23:06 crc kubenswrapper[4675]: I0124 07:23:06.388657 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-znd7g" podStartSLOduration=2.549650915 podStartE2EDuration="9.388633103s" podCreationTimestamp="2026-01-24 07:22:57 +0000 UTC" firstStartedPulling="2026-01-24 07:22:59.279063871 +0000 UTC m=+1780.575169094" 
lastFinishedPulling="2026-01-24 07:23:06.118046059 +0000 UTC m=+1787.414151282" observedRunningTime="2026-01-24 07:23:06.381117881 +0000 UTC m=+1787.677223114" watchObservedRunningTime="2026-01-24 07:23:06.388633103 +0000 UTC m=+1787.684738346" Jan 24 07:23:07 crc kubenswrapper[4675]: I0124 07:23:07.789841 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:07 crc kubenswrapper[4675]: I0124 07:23:07.790245 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:08 crc kubenswrapper[4675]: I0124 07:23:08.835748 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-znd7g" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" probeResult="failure" output=< Jan 24 07:23:08 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:23:08 crc kubenswrapper[4675]: > Jan 24 07:23:09 crc kubenswrapper[4675]: I0124 07:23:09.943646 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:23:09 crc kubenswrapper[4675]: E0124 07:23:09.944208 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:23:17 crc kubenswrapper[4675]: I0124 07:23:17.842644 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:17 crc kubenswrapper[4675]: I0124 07:23:17.900202 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:18 crc kubenswrapper[4675]: I0124 07:23:18.092474 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:23:19 crc kubenswrapper[4675]: I0124 07:23:19.474222 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-znd7g" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" containerID="cri-o://164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" gracePeriod=2 Jan 24 07:23:19 crc kubenswrapper[4675]: I0124 07:23:19.952196 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.154268 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"744470af-3cf4-4f93-8269-4e579adc0101\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.154539 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"744470af-3cf4-4f93-8269-4e579adc0101\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.154742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"744470af-3cf4-4f93-8269-4e579adc0101\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.155103 4675 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities" (OuterVolumeSpecName: "utilities") pod "744470af-3cf4-4f93-8269-4e579adc0101" (UID: "744470af-3cf4-4f93-8269-4e579adc0101"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.155333 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.165354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds" (OuterVolumeSpecName: "kube-api-access-dh2ds") pod "744470af-3cf4-4f93-8269-4e579adc0101" (UID: "744470af-3cf4-4f93-8269-4e579adc0101"). InnerVolumeSpecName "kube-api-access-dh2ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.257124 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.331174 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "744470af-3cf4-4f93-8269-4e579adc0101" (UID: "744470af-3cf4-4f93-8269-4e579adc0101"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.358663 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.486281 4675 generic.go:334] "Generic (PLEG): container finished" podID="744470af-3cf4-4f93-8269-4e579adc0101" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" exitCode=0 Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.486382 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.486385 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee"} Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.488315 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"bd0b05f223be494afcf36db2c1abefbea4c81384df296c831af9fc3ee3f29310"} Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.488341 4675 scope.go:117] "RemoveContainer" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.510697 4675 scope.go:117] "RemoveContainer" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.548313 4675 scope.go:117] "RemoveContainer" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 
07:23:20.586162 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.596065 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.606976 4675 scope.go:117] "RemoveContainer" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.607873 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee\": container with ID starting with 164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee not found: ID does not exist" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608000 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee"} err="failed to get container status \"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee\": rpc error: code = NotFound desc = could not find container \"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee\": container with ID starting with 164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee not found: ID does not exist" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608119 4675 scope.go:117] "RemoveContainer" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.608795 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1\": container with ID starting with 
e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1 not found: ID does not exist" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608863 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"} err="failed to get container status \"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1\": rpc error: code = NotFound desc = could not find container \"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1\": container with ID starting with e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1 not found: ID does not exist" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608881 4675 scope.go:117] "RemoveContainer" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765" Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.609402 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765\": container with ID starting with a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765 not found: ID does not exist" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.609493 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765"} err="failed to get container status \"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765\": rpc error: code = NotFound desc = could not find container \"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765\": container with ID starting with a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765 not found: ID does not 
exist" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.943318 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.943854 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.961204 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744470af-3cf4-4f93-8269-4e579adc0101" path="/var/lib/kubelet/pods/744470af-3cf4-4f93-8269-4e579adc0101/volumes" Jan 24 07:23:31 crc kubenswrapper[4675]: I0124 07:23:31.944132 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:23:31 crc kubenswrapper[4675]: E0124 07:23:31.945458 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:23:34 crc kubenswrapper[4675]: I0124 07:23:34.638228 4675 generic.go:334] "Generic (PLEG): container finished" podID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerID="4e774ccd4a0c45f76240627c3ed1c01aa5657474b302cffa11c10c2b8e04e982" exitCode=0 Jan 24 07:23:34 crc kubenswrapper[4675]: I0124 07:23:34.638300 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerDied","Data":"4e774ccd4a0c45f76240627c3ed1c01aa5657474b302cffa11c10c2b8e04e982"} Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.092748 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.139680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"09d123a4-63c4-4269-b4e1-12932baedfd0\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.139848 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"09d123a4-63c4-4269-b4e1-12932baedfd0\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.139987 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"09d123a4-63c4-4269-b4e1-12932baedfd0\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.144633 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d" (OuterVolumeSpecName: "kube-api-access-gd62d") pod "09d123a4-63c4-4269-b4e1-12932baedfd0" (UID: "09d123a4-63c4-4269-b4e1-12932baedfd0"). InnerVolumeSpecName "kube-api-access-gd62d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.168315 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09d123a4-63c4-4269-b4e1-12932baedfd0" (UID: "09d123a4-63c4-4269-b4e1-12932baedfd0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.174577 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory" (OuterVolumeSpecName: "inventory") pod "09d123a4-63c4-4269-b4e1-12932baedfd0" (UID: "09d123a4-63c4-4269-b4e1-12932baedfd0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.242814 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.242861 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.242877 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.667022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" 
event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerDied","Data":"78b7fe3eb41435eafa123bc3a0e51de4500cea6ee1cf2c6b836b190ca84df194"} Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.667069 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78b7fe3eb41435eafa123bc3a0e51de4500cea6ee1cf2c6b836b190ca84df194" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.667132 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753044 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"] Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753476 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753498 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753510 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-content" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753517 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-content" Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753543 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-utilities" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753549 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-utilities" Jan 24 07:23:36 crc 
kubenswrapper[4675]: E0124 07:23:36.753559 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753566 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753741 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753767 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.754476 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.756808 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.757046 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.757231 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.758825 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.782265 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"] Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.853099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.853202 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 
07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.853653 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.956247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.956555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.956642 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.960064 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.965427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.972916 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:37 crc kubenswrapper[4675]: I0124 07:23:37.076397 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:23:37 crc kubenswrapper[4675]: I0124 07:23:37.583539 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"] Jan 24 07:23:37 crc kubenswrapper[4675]: I0124 07:23:37.675151 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerStarted","Data":"3b993624ad3fbf948909e7968e38338ba99068f586554ea9b4c566880a979021"} Jan 24 07:23:38 crc kubenswrapper[4675]: I0124 07:23:38.686017 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerStarted","Data":"493e076588b3025598ca6ab35ffda470c220488fd356f1398e842589774ea9b6"} Jan 24 07:23:38 crc kubenswrapper[4675]: I0124 07:23:38.704701 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" podStartSLOduration=2.098655649 podStartE2EDuration="2.704682615s" podCreationTimestamp="2026-01-24 07:23:36 +0000 UTC" firstStartedPulling="2026-01-24 07:23:37.59160663 +0000 UTC m=+1818.887711843" lastFinishedPulling="2026-01-24 07:23:38.197633586 +0000 UTC m=+1819.493738809" observedRunningTime="2026-01-24 07:23:38.699902059 +0000 UTC m=+1819.996007282" watchObservedRunningTime="2026-01-24 07:23:38.704682615 +0000 UTC m=+1820.000787838" Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.279610 4675 scope.go:117] "RemoveContainer" containerID="d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5" Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.320168 4675 scope.go:117] "RemoveContainer" 
containerID="ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c" Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.386253 4675 scope.go:117] "RemoveContainer" containerID="5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8" Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.432799 4675 scope.go:117] "RemoveContainer" containerID="eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53" Jan 24 07:23:42 crc kubenswrapper[4675]: I0124 07:23:42.942147 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:23:42 crc kubenswrapper[4675]: E0124 07:23:42.942770 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:23:48 crc kubenswrapper[4675]: I0124 07:23:48.046624 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p847h"] Jan 24 07:23:48 crc kubenswrapper[4675]: I0124 07:23:48.054355 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p847h"] Jan 24 07:23:48 crc kubenswrapper[4675]: I0124 07:23:48.957364 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" path="/var/lib/kubelet/pods/d4d4a29e-dbe1-4145-b0af-afa0c77172b9/volumes" Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.037173 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.050490 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-47cc-account-create-update-qbjjs"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.060604 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.074332 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.082385 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.090510 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.099864 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.108703 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.116198 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"] Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.125331 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"] Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.955735 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" path="/var/lib/kubelet/pods/9b6ffe68-4ebd-47e8-8b11-20050394e5b7/volumes" Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.957013 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" path="/var/lib/kubelet/pods/c962c5e1-a244-4690-935e-9a7b0d5fc7e4/volumes" Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 
07:23:50.957783 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" path="/var/lib/kubelet/pods/cb102798-6f2c-4cf4-b697-03cc94f9174a/volumes" Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.958569 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" path="/var/lib/kubelet/pods/db48a3bd-546d-4f52-a9bc-340e03790730/volumes" Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.959988 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" path="/var/lib/kubelet/pods/f8458b8a-6770-4e62-9848-55a9b142cb8c/volumes" Jan 24 07:23:55 crc kubenswrapper[4675]: I0124 07:23:55.942454 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:23:55 crc kubenswrapper[4675]: E0124 07:23:55.943422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:24:07 crc kubenswrapper[4675]: I0124 07:24:07.943464 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:24:07 crc kubenswrapper[4675]: E0124 07:24:07.944400 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:24:18 crc kubenswrapper[4675]: I0124 07:24:18.959001 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.040386 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.048060 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.083595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2"} Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.953779 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827f33c6-ea9f-4312-9533-e952a218f464" path="/var/lib/kubelet/pods/827f33c6-ea9f-4312-9533-e952a218f464/volumes" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.565458 4675 scope.go:117] "RemoveContainer" containerID="692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.609259 4675 scope.go:117] "RemoveContainer" containerID="92a6b4b87b9b2ef26a79f73c81ecbeb36fe6ccb8b0e511ab2d00e52dda5c10ce" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.631252 4675 scope.go:117] "RemoveContainer" containerID="ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.667696 4675 scope.go:117] "RemoveContainer" containerID="e024578d84cf52e29f779949e2955f4eac1d56a123af391ad810ea1674a31648" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.708480 
4675 scope.go:117] "RemoveContainer" containerID="7ee7b6faa999fda3d6ec97508bbaca0406687b589e89517642e51b8d024a1a97" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.777322 4675 scope.go:117] "RemoveContainer" containerID="13922ccdb386ccdef5ac3f7ca81cf15c2217528fedc1c377893db26450c6489d" Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.815541 4675 scope.go:117] "RemoveContainer" containerID="dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5" Jan 24 07:24:45 crc kubenswrapper[4675]: I0124 07:24:45.043953 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"] Jan 24 07:24:45 crc kubenswrapper[4675]: I0124 07:24:45.055481 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"] Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.045500 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.055044 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.958997 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" path="/var/lib/kubelet/pods/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284/volumes" Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.960318 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" path="/var/lib/kubelet/pods/a1819bfe-22cc-4ead-8e81-717ee70b2e83/volumes" Jan 24 07:25:07 crc kubenswrapper[4675]: I0124 07:25:07.501371 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerID="493e076588b3025598ca6ab35ffda470c220488fd356f1398e842589774ea9b6" exitCode=0 Jan 24 07:25:07 crc kubenswrapper[4675]: I0124 07:25:07.501467 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerDied","Data":"493e076588b3025598ca6ab35ffda470c220488fd356f1398e842589774ea9b6"} Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.886413 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.968269 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.968383 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.968479 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.974617 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf" (OuterVolumeSpecName: "kube-api-access-hfxrf") pod "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" (UID: "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3"). InnerVolumeSpecName "kube-api-access-hfxrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.995207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory" (OuterVolumeSpecName: "inventory") pod "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" (UID: "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.002232 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" (UID: "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.070686 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.070714 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.070735 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.526980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" 
event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerDied","Data":"3b993624ad3fbf948909e7968e38338ba99068f586554ea9b4c566880a979021"} Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.527039 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b993624ad3fbf948909e7968e38338ba99068f586554ea9b4c566880a979021" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.527152 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.661670 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"] Jan 24 07:25:09 crc kubenswrapper[4675]: E0124 07:25:09.663345 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.663394 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.663619 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.664774 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673005 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673064 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673191 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673370 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.697399 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"] Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.705413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.705504 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 
07:25:09.705576 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.806997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.807342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.807476 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.811348 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.811743 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.822280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.995138 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:10 crc kubenswrapper[4675]: I0124 07:25:10.563165 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"] Jan 24 07:25:10 crc kubenswrapper[4675]: W0124 07:25:10.573247 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9c128cc_910c_4ef2_9b56_14adf4d264b3.slice/crio-0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05 WatchSource:0}: Error finding container 0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05: Status 404 returned error can't find the container with id 0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05 Jan 24 07:25:11 crc kubenswrapper[4675]: I0124 07:25:11.549510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerStarted","Data":"f76dc390b199552873d6fce988d31a984953181f0421729e74dbff18258c271b"} Jan 24 07:25:11 crc kubenswrapper[4675]: I0124 07:25:11.550169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerStarted","Data":"0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05"} Jan 24 07:25:11 crc kubenswrapper[4675]: I0124 07:25:11.574750 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" podStartSLOduration=1.981732667 podStartE2EDuration="2.574702781s" podCreationTimestamp="2026-01-24 07:25:09 +0000 UTC" firstStartedPulling="2026-01-24 07:25:10.57628705 +0000 UTC m=+1911.872392273" lastFinishedPulling="2026-01-24 07:25:11.169257164 +0000 UTC 
m=+1912.465362387" observedRunningTime="2026-01-24 07:25:11.568360556 +0000 UTC m=+1912.864465789" watchObservedRunningTime="2026-01-24 07:25:11.574702781 +0000 UTC m=+1912.870808024" Jan 24 07:25:18 crc kubenswrapper[4675]: I0124 07:25:18.606561 4675 generic.go:334] "Generic (PLEG): container finished" podID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerID="f76dc390b199552873d6fce988d31a984953181f0421729e74dbff18258c271b" exitCode=0 Jan 24 07:25:18 crc kubenswrapper[4675]: I0124 07:25:18.607114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerDied","Data":"f76dc390b199552873d6fce988d31a984953181f0421729e74dbff18258c271b"} Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.038786 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.102913 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.102976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.103092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod 
\"e9c128cc-910c-4ef2-9b56-14adf4d264b3\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.118896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh" (OuterVolumeSpecName: "kube-api-access-5wbmh") pod "e9c128cc-910c-4ef2-9b56-14adf4d264b3" (UID: "e9c128cc-910c-4ef2-9b56-14adf4d264b3"). InnerVolumeSpecName "kube-api-access-5wbmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.132450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9c128cc-910c-4ef2-9b56-14adf4d264b3" (UID: "e9c128cc-910c-4ef2-9b56-14adf4d264b3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.134780 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory" (OuterVolumeSpecName: "inventory") pod "e9c128cc-910c-4ef2-9b56-14adf4d264b3" (UID: "e9c128cc-910c-4ef2-9b56-14adf4d264b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.204996 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.205030 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.205045 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.624063 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerDied","Data":"0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05"} Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.624108 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.624126 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.713248 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv"] Jan 24 07:25:20 crc kubenswrapper[4675]: E0124 07:25:20.713656 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.713677 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.713880 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.714564 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.717890 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.718131 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.718391 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.719382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.724615 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv"] Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.814683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.814941 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.815209 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.916647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.916831 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.916882 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.921776 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.922484 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.936969 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:21 crc kubenswrapper[4675]: I0124 07:25:21.071310 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:21 crc kubenswrapper[4675]: I0124 07:25:21.602526 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv"] Jan 24 07:25:21 crc kubenswrapper[4675]: I0124 07:25:21.634027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerStarted","Data":"ba0545c2af86b7f80e12ad50cc6fe9ae7dbf0381beb86e961ddc73a301493bfd"} Jan 24 07:25:23 crc kubenswrapper[4675]: I0124 07:25:23.651796 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerStarted","Data":"751118413c1e3ed7487377a3f697c22e66689c703a60f6fb3ee8356ce52f410a"} Jan 24 07:25:23 crc kubenswrapper[4675]: I0124 07:25:23.684235 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" podStartSLOduration=2.6262152309999998 podStartE2EDuration="3.684210898s" podCreationTimestamp="2026-01-24 07:25:20 +0000 UTC" firstStartedPulling="2026-01-24 07:25:21.601702327 +0000 UTC m=+1922.897807550" lastFinishedPulling="2026-01-24 07:25:22.659697994 +0000 UTC m=+1923.955803217" observedRunningTime="2026-01-24 07:25:23.675299561 +0000 UTC m=+1924.971404794" watchObservedRunningTime="2026-01-24 07:25:23.684210898 +0000 UTC m=+1924.980316141" Jan 24 07:25:30 crc kubenswrapper[4675]: I0124 07:25:30.042686 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:25:30 crc kubenswrapper[4675]: I0124 07:25:30.049136 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:25:30 crc kubenswrapper[4675]: I0124 
07:25:30.953899 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" path="/var/lib/kubelet/pods/cd0aa104-48a4-4eab-afcc-2ef03d860551/volumes" Jan 24 07:25:40 crc kubenswrapper[4675]: I0124 07:25:40.965020 4675 scope.go:117] "RemoveContainer" containerID="78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24" Jan 24 07:25:41 crc kubenswrapper[4675]: I0124 07:25:41.035081 4675 scope.go:117] "RemoveContainer" containerID="d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4" Jan 24 07:25:41 crc kubenswrapper[4675]: I0124 07:25:41.094790 4675 scope.go:117] "RemoveContainer" containerID="28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af" Jan 24 07:26:09 crc kubenswrapper[4675]: I0124 07:26:09.074252 4675 generic.go:334] "Generic (PLEG): container finished" podID="27ad7637-701b-43e1-8440-0fd32522fc56" containerID="751118413c1e3ed7487377a3f697c22e66689c703a60f6fb3ee8356ce52f410a" exitCode=0 Jan 24 07:26:09 crc kubenswrapper[4675]: I0124 07:26:09.074342 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerDied","Data":"751118413c1e3ed7487377a3f697c22e66689c703a60f6fb3ee8356ce52f410a"} Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.617245 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.701886 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"27ad7637-701b-43e1-8440-0fd32522fc56\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.701960 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod \"27ad7637-701b-43e1-8440-0fd32522fc56\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.702017 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"27ad7637-701b-43e1-8440-0fd32522fc56\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.716704 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd" (OuterVolumeSpecName: "kube-api-access-7szzd") pod "27ad7637-701b-43e1-8440-0fd32522fc56" (UID: "27ad7637-701b-43e1-8440-0fd32522fc56"). InnerVolumeSpecName "kube-api-access-7szzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.740709 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27ad7637-701b-43e1-8440-0fd32522fc56" (UID: "27ad7637-701b-43e1-8440-0fd32522fc56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.762890 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory" (OuterVolumeSpecName: "inventory") pod "27ad7637-701b-43e1-8440-0fd32522fc56" (UID: "27ad7637-701b-43e1-8440-0fd32522fc56"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.806548 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.806585 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.806599 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") on node \"crc\" DevicePath \"\"" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.095543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" 
event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerDied","Data":"ba0545c2af86b7f80e12ad50cc6fe9ae7dbf0381beb86e961ddc73a301493bfd"} Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.095604 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0545c2af86b7f80e12ad50cc6fe9ae7dbf0381beb86e961ddc73a301493bfd" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.095626 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.232497 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm"] Jan 24 07:26:11 crc kubenswrapper[4675]: E0124 07:26:11.232912 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad7637-701b-43e1-8440-0fd32522fc56" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.232931 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad7637-701b-43e1-8440-0fd32522fc56" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.233081 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ad7637-701b-43e1-8440-0fd32522fc56" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.233682 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.240065 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.240369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.240464 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.242609 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm"] Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.243924 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.317072 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.317182 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc 
kubenswrapper[4675]: I0124 07:26:11.317249 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.419693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.420186 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.420368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.425989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.427036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.436712 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.555533 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:12 crc kubenswrapper[4675]: I0124 07:26:12.091923 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm"] Jan 24 07:26:12 crc kubenswrapper[4675]: I0124 07:26:12.105592 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerStarted","Data":"a99c83884277cad4ed2cc7428f3b4ffd3675633ee45914762062beb4b88d95c2"} Jan 24 07:26:13 crc kubenswrapper[4675]: I0124 07:26:13.114411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerStarted","Data":"282c4b7e726d12b8ea26b87660396ed2ec7d6cd2371b1a805c4ecd4f72af3c0f"} Jan 24 07:26:13 crc kubenswrapper[4675]: I0124 07:26:13.136049 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" podStartSLOduration=1.54995397 podStartE2EDuration="2.136025057s" podCreationTimestamp="2026-01-24 07:26:11 +0000 UTC" firstStartedPulling="2026-01-24 07:26:12.092466541 +0000 UTC m=+1973.388571764" lastFinishedPulling="2026-01-24 07:26:12.678537618 +0000 UTC m=+1973.974642851" observedRunningTime="2026-01-24 07:26:13.128386851 +0000 UTC m=+1974.424492074" watchObservedRunningTime="2026-01-24 07:26:13.136025057 +0000 UTC m=+1974.432130280" Jan 24 07:26:38 crc kubenswrapper[4675]: I0124 07:26:38.629919 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:26:38 crc 
kubenswrapper[4675]: I0124 07:26:38.630480 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:27:08 crc kubenswrapper[4675]: I0124 07:27:08.630018 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:27:08 crc kubenswrapper[4675]: I0124 07:27:08.630466 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:27:15 crc kubenswrapper[4675]: I0124 07:27:15.664633 4675 generic.go:334] "Generic (PLEG): container finished" podID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerID="282c4b7e726d12b8ea26b87660396ed2ec7d6cd2371b1a805c4ecd4f72af3c0f" exitCode=0 Jan 24 07:27:15 crc kubenswrapper[4675]: I0124 07:27:15.665005 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerDied","Data":"282c4b7e726d12b8ea26b87660396ed2ec7d6cd2371b1a805c4ecd4f72af3c0f"} Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.210337 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.320949 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.321041 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.321139 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.332355 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp" (OuterVolumeSpecName: "kube-api-access-tpgxp") pod "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" (UID: "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f"). InnerVolumeSpecName "kube-api-access-tpgxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.349274 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" (UID: "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.350832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory" (OuterVolumeSpecName: "inventory") pod "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" (UID: "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.423079 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.423119 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.423130 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.682927 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" 
event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerDied","Data":"a99c83884277cad4ed2cc7428f3b4ffd3675633ee45914762062beb4b88d95c2"} Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.682981 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a99c83884277cad4ed2cc7428f3b4ffd3675633ee45914762062beb4b88d95c2" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.683334 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.780902 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wq6r9"] Jan 24 07:27:17 crc kubenswrapper[4675]: E0124 07:27:17.781575 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.781662 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.781987 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.782864 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.787741 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.787994 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.788120 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.788224 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.843338 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wq6r9"] Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.932089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.932427 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.932552 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.034435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.034521 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.034656 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.038306 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.038559 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.053855 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.143085 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.697234 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wq6r9"] Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.705885 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.199787 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.706425 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerStarted","Data":"b538714ae8c39caa82c5c4a1821a82bf3d6640c5f8ea1e748a6ce8c9071fa698"} Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.706841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" 
event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerStarted","Data":"ecaa1c9e319c5290634a023a32d36022af2d97bea126b4183f283e33357cdc18"} Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.727892 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" podStartSLOduration=2.237005032 podStartE2EDuration="2.727868559s" podCreationTimestamp="2026-01-24 07:27:17 +0000 UTC" firstStartedPulling="2026-01-24 07:27:18.705636651 +0000 UTC m=+2040.001741864" lastFinishedPulling="2026-01-24 07:27:19.196500158 +0000 UTC m=+2040.492605391" observedRunningTime="2026-01-24 07:27:19.72212693 +0000 UTC m=+2041.018232153" watchObservedRunningTime="2026-01-24 07:27:19.727868559 +0000 UTC m=+2041.023973782" Jan 24 07:27:27 crc kubenswrapper[4675]: I0124 07:27:27.773497 4675 generic.go:334] "Generic (PLEG): container finished" podID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerID="b538714ae8c39caa82c5c4a1821a82bf3d6640c5f8ea1e748a6ce8c9071fa698" exitCode=0 Jan 24 07:27:27 crc kubenswrapper[4675]: I0124 07:27:27.773574 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerDied","Data":"b538714ae8c39caa82c5c4a1821a82bf3d6640c5f8ea1e748a6ce8c9071fa698"} Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.210785 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.264662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.264827 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.264897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.270864 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d" (OuterVolumeSpecName: "kube-api-access-5tb5d") pod "191f15b0-8a3b-4dc4-bc49-9003c61619bf" (UID: "191f15b0-8a3b-4dc4-bc49-9003c61619bf"). InnerVolumeSpecName "kube-api-access-5tb5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.292490 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "191f15b0-8a3b-4dc4-bc49-9003c61619bf" (UID: "191f15b0-8a3b-4dc4-bc49-9003c61619bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.304304 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "191f15b0-8a3b-4dc4-bc49-9003c61619bf" (UID: "191f15b0-8a3b-4dc4-bc49-9003c61619bf"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.367125 4675 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.367152 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.367165 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.795327 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" 
event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerDied","Data":"ecaa1c9e319c5290634a023a32d36022af2d97bea126b4183f283e33357cdc18"} Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.795371 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecaa1c9e319c5290634a023a32d36022af2d97bea126b4183f283e33357cdc18" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.795403 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.880337 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2"] Jan 24 07:27:29 crc kubenswrapper[4675]: E0124 07:27:29.880913 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerName="ssh-known-hosts-edpm-deployment" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.881000 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerName="ssh-known-hosts-edpm-deployment" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.881264 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerName="ssh-known-hosts-edpm-deployment" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.881970 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.883858 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.884022 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.884086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.889952 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.897106 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2"] Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.977075 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.977376 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.977569 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.079028 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.079184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.079220 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.087485 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: 
\"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.098174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.099971 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.202276 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.723217 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2"] Jan 24 07:27:30 crc kubenswrapper[4675]: W0124 07:27:30.726938 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bc4008d_f8c6_4745_b524_d6136632cbfb.slice/crio-62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01 WatchSource:0}: Error finding container 62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01: Status 404 returned error can't find the container with id 62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01 Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.804670 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerStarted","Data":"62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01"} Jan 24 07:27:31 crc kubenswrapper[4675]: I0124 07:27:31.816379 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerStarted","Data":"cd266032e8c5ce16488fe5a3a2a9f6e67a18bbaf6addbbdece241e4ba080673d"} Jan 24 07:27:31 crc kubenswrapper[4675]: I0124 07:27:31.830190 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" podStartSLOduration=2.335300604 podStartE2EDuration="2.83017023s" podCreationTimestamp="2026-01-24 07:27:29 +0000 UTC" firstStartedPulling="2026-01-24 07:27:30.729702524 +0000 UTC m=+2052.025807747" lastFinishedPulling="2026-01-24 07:27:31.22457216 +0000 UTC m=+2052.520677373" observedRunningTime="2026-01-24 
07:27:31.829266419 +0000 UTC m=+2053.125371642" watchObservedRunningTime="2026-01-24 07:27:31.83017023 +0000 UTC m=+2053.126275463" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.629877 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630139 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630176 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630795 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630837 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2" gracePeriod=600 Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.879054 4675 generic.go:334] 
"Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2" exitCode=0 Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.879132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2"} Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.879221 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:27:39 crc kubenswrapper[4675]: I0124 07:27:39.892788 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"} Jan 24 07:27:40 crc kubenswrapper[4675]: I0124 07:27:40.903474 4675 generic.go:334] "Generic (PLEG): container finished" podID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerID="cd266032e8c5ce16488fe5a3a2a9f6e67a18bbaf6addbbdece241e4ba080673d" exitCode=0 Jan 24 07:27:40 crc kubenswrapper[4675]: I0124 07:27:40.903561 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerDied","Data":"cd266032e8c5ce16488fe5a3a2a9f6e67a18bbaf6addbbdece241e4ba080673d"} Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.350280 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.549532 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"3bc4008d-f8c6-4745-b524-d6136632cbfb\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.549800 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"3bc4008d-f8c6-4745-b524-d6136632cbfb\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.549836 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"3bc4008d-f8c6-4745-b524-d6136632cbfb\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.556967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94" (OuterVolumeSpecName: "kube-api-access-vrj94") pod "3bc4008d-f8c6-4745-b524-d6136632cbfb" (UID: "3bc4008d-f8c6-4745-b524-d6136632cbfb"). InnerVolumeSpecName "kube-api-access-vrj94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.579166 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory" (OuterVolumeSpecName: "inventory") pod "3bc4008d-f8c6-4745-b524-d6136632cbfb" (UID: "3bc4008d-f8c6-4745-b524-d6136632cbfb"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.583932 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3bc4008d-f8c6-4745-b524-d6136632cbfb" (UID: "3bc4008d-f8c6-4745-b524-d6136632cbfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.651703 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.651749 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.651761 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.919715 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerDied","Data":"62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01"} Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.919783 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 
07:27:42.919815 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.037603 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw"] Jan 24 07:27:43 crc kubenswrapper[4675]: E0124 07:27:43.038005 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.038025 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.038262 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.038948 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.047935 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.048023 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.048315 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.048540 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.052040 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw"] Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.060789 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.060914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 
07:27:43.061040 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.162346 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.163473 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.163632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.167609 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.167782 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.181939 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.354666 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: W0124 07:27:43.848923 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1b0570_d3a2_4029_bcf8_f41144ea0f06.slice/crio-153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299 WatchSource:0}: Error finding container 153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299: Status 404 returned error can't find the container with id 153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299 Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.850242 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw"] Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.930402 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerStarted","Data":"153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299"} Jan 24 07:27:44 crc kubenswrapper[4675]: I0124 07:27:44.940395 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerStarted","Data":"384eb083a3203008b4bd5eb56dedcd565ab327ceb2439749925163e92ba9d96a"} Jan 24 07:27:44 crc kubenswrapper[4675]: I0124 07:27:44.966893 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" podStartSLOduration=1.4226402839999999 podStartE2EDuration="1.966875556s" podCreationTimestamp="2026-01-24 07:27:43 +0000 UTC" firstStartedPulling="2026-01-24 07:27:43.85156039 +0000 UTC m=+2065.147665623" lastFinishedPulling="2026-01-24 07:27:44.395795672 +0000 UTC m=+2065.691900895" 
observedRunningTime="2026-01-24 07:27:44.961896245 +0000 UTC m=+2066.258001468" watchObservedRunningTime="2026-01-24 07:27:44.966875556 +0000 UTC m=+2066.262980779" Jan 24 07:27:57 crc kubenswrapper[4675]: I0124 07:27:57.047536 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerID="384eb083a3203008b4bd5eb56dedcd565ab327ceb2439749925163e92ba9d96a" exitCode=0 Jan 24 07:27:57 crc kubenswrapper[4675]: I0124 07:27:57.047616 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerDied","Data":"384eb083a3203008b4bd5eb56dedcd565ab327ceb2439749925163e92ba9d96a"} Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.487583 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.664464 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.665104 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.665212 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod 
\"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.669424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9" (OuterVolumeSpecName: "kube-api-access-969h9") pod "7b1b0570-d3a2-4029-bcf8-f41144ea0f06" (UID: "7b1b0570-d3a2-4029-bcf8-f41144ea0f06"). InnerVolumeSpecName "kube-api-access-969h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.694174 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b1b0570-d3a2-4029-bcf8-f41144ea0f06" (UID: "7b1b0570-d3a2-4029-bcf8-f41144ea0f06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.695765 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory" (OuterVolumeSpecName: "inventory") pod "7b1b0570-d3a2-4029-bcf8-f41144ea0f06" (UID: "7b1b0570-d3a2-4029-bcf8-f41144ea0f06"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.766906 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.766932 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.766942 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.066275 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerDied","Data":"153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299"} Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.066320 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.066376 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.163995 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh"] Jan 24 07:27:59 crc kubenswrapper[4675]: E0124 07:27:59.164424 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.164788 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.165067 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.165765 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.168644 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169118 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169306 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169653 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169676 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169861 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.173534 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174242 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174248 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174301 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174359 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174387 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174423 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174476 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc 
kubenswrapper[4675]: I0124 07:27:59.174641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174684 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174841 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174890 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174958 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.185195 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh"] Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277236 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277343 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277671 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277763 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277789 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.281399 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" 
Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.282199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.282415 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.282682 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.284046 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.285165 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.285651 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.287401 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.287701 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.288442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.289149 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.290746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.294890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.300831 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: 
\"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.486009 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:28:00 crc kubenswrapper[4675]: W0124 07:28:00.051287 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d09456f_a230_420b_b288_c0dc3e8a6e22.slice/crio-e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8 WatchSource:0}: Error finding container e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8: Status 404 returned error can't find the container with id e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8 Jan 24 07:28:00 crc kubenswrapper[4675]: I0124 07:28:00.055876 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh"] Jan 24 07:28:00 crc kubenswrapper[4675]: I0124 07:28:00.074103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerStarted","Data":"e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8"} Jan 24 07:28:02 crc kubenswrapper[4675]: I0124 07:28:02.094169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerStarted","Data":"abcb54c0d3418aeb602c69c7b34a550299227d39f039236a8c91cb87b227c876"} Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.017015 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" podStartSLOduration=5.105243446 
podStartE2EDuration="6.016991405s" podCreationTimestamp="2026-01-24 07:27:59 +0000 UTC" firstStartedPulling="2026-01-24 07:28:00.054846897 +0000 UTC m=+2081.350952120" lastFinishedPulling="2026-01-24 07:28:00.966594856 +0000 UTC m=+2082.262700079" observedRunningTime="2026-01-24 07:28:02.130136022 +0000 UTC m=+2083.426241255" watchObservedRunningTime="2026-01-24 07:28:05.016991405 +0000 UTC m=+2086.313096628" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.027367 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.029738 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.041581 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.200119 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.200191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.200230 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302267 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302447 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302706 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.335193 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.355555 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: W0124 07:28:05.855508 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b WatchSource:0}: Error finding container cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b: Status 404 returned error can't find the container with id cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.855770 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:06 crc kubenswrapper[4675]: I0124 07:28:06.135204 4675 generic.go:334] "Generic (PLEG): container finished" podID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" exitCode=0 Jan 24 07:28:06 crc kubenswrapper[4675]: I0124 07:28:06.135248 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" 
event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b"} Jan 24 07:28:06 crc kubenswrapper[4675]: I0124 07:28:06.135274 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerStarted","Data":"cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b"} Jan 24 07:28:08 crc kubenswrapper[4675]: I0124 07:28:08.157943 4675 generic.go:334] "Generic (PLEG): container finished" podID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" exitCode=0 Jan 24 07:28:08 crc kubenswrapper[4675]: I0124 07:28:08.158046 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd"} Jan 24 07:28:09 crc kubenswrapper[4675]: I0124 07:28:09.183644 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerStarted","Data":"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285"} Jan 24 07:28:09 crc kubenswrapper[4675]: I0124 07:28:09.212262 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rh8kw" podStartSLOduration=2.7259307919999998 podStartE2EDuration="5.212245338s" podCreationTimestamp="2026-01-24 07:28:04 +0000 UTC" firstStartedPulling="2026-01-24 07:28:06.137950348 +0000 UTC m=+2087.434055571" lastFinishedPulling="2026-01-24 07:28:08.624264894 +0000 UTC m=+2089.920370117" observedRunningTime="2026-01-24 07:28:09.20574429 +0000 UTC m=+2090.501849503" watchObservedRunningTime="2026-01-24 07:28:09.212245338 +0000 UTC 
m=+2090.508350561" Jan 24 07:28:15 crc kubenswrapper[4675]: I0124 07:28:15.355811 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:15 crc kubenswrapper[4675]: I0124 07:28:15.356390 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:15 crc kubenswrapper[4675]: I0124 07:28:15.407250 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:16 crc kubenswrapper[4675]: I0124 07:28:16.288163 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:16 crc kubenswrapper[4675]: I0124 07:28:16.337489 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.256385 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rh8kw" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" containerID="cri-o://971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" gracePeriod=2 Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.673321 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.743979 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"3281384d-f7c7-4579-a0ef-16e9b131004c\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.745126 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities" (OuterVolumeSpecName: "utilities") pod "3281384d-f7c7-4579-a0ef-16e9b131004c" (UID: "3281384d-f7c7-4579-a0ef-16e9b131004c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.846340 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"3281384d-f7c7-4579-a0ef-16e9b131004c\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.846390 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"3281384d-f7c7-4579-a0ef-16e9b131004c\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.846842 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.852952 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j" (OuterVolumeSpecName: "kube-api-access-j7p4j") pod "3281384d-f7c7-4579-a0ef-16e9b131004c" (UID: "3281384d-f7c7-4579-a0ef-16e9b131004c"). InnerVolumeSpecName "kube-api-access-j7p4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.905431 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3281384d-f7c7-4579-a0ef-16e9b131004c" (UID: "3281384d-f7c7-4579-a0ef-16e9b131004c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.950339 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.950371 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.268848 4675 generic.go:334] "Generic (PLEG): container finished" podID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" exitCode=0 Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.268913 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.268932 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285"} Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.269324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b"} Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.269345 4675 scope.go:117] "RemoveContainer" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.297966 4675 scope.go:117] "RemoveContainer" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.304289 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.319790 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.345990 4675 scope.go:117] "RemoveContainer" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.375331 4675 scope.go:117] "RemoveContainer" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" Jan 24 07:28:19 crc kubenswrapper[4675]: E0124 07:28:19.376095 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285\": container with ID starting with 971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285 not found: ID does not exist" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376124 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285"} err="failed to get container status \"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285\": rpc error: code = NotFound desc = could not find container \"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285\": container with ID starting with 971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285 not found: ID does not exist" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376143 4675 scope.go:117] "RemoveContainer" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" Jan 24 07:28:19 crc kubenswrapper[4675]: E0124 07:28:19.376531 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd\": container with ID starting with 41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd not found: ID does not exist" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376575 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd"} err="failed to get container status \"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd\": rpc error: code = NotFound desc = could not find container \"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd\": container with ID 
starting with 41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd not found: ID does not exist" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376595 4675 scope.go:117] "RemoveContainer" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" Jan 24 07:28:19 crc kubenswrapper[4675]: E0124 07:28:19.376944 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b\": container with ID starting with a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b not found: ID does not exist" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376963 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b"} err="failed to get container status \"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b\": rpc error: code = NotFound desc = could not find container \"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b\": container with ID starting with a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b not found: ID does not exist" Jan 24 07:28:20 crc kubenswrapper[4675]: I0124 07:28:20.951711 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" path="/var/lib/kubelet/pods/3281384d-f7c7-4579-a0ef-16e9b131004c/volumes" Jan 24 07:28:27 crc kubenswrapper[4675]: E0124 07:28:27.099782 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:37 crc kubenswrapper[4675]: E0124 07:28:37.362525 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:44 crc kubenswrapper[4675]: I0124 07:28:44.484607 4675 generic.go:334] "Generic (PLEG): container finished" podID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerID="abcb54c0d3418aeb602c69c7b34a550299227d39f039236a8c91cb87b227c876" exitCode=0 Jan 24 07:28:44 crc kubenswrapper[4675]: I0124 07:28:44.484780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerDied","Data":"abcb54c0d3418aeb602c69c7b34a550299227d39f039236a8c91cb87b227c876"} Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.053840 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220232 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220710 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220929 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221069 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221160 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221252 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221351 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221437 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221573 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc 
kubenswrapper[4675]: I0124 07:28:46.221651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221753 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221841 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221940 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.226370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.227112 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7" (OuterVolumeSpecName: "kube-api-access-5smg7") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "kube-api-access-5smg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.229217 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.229698 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.229847 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.231055 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.231307 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.234165 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.234528 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.235797 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.240853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.242889 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.251819 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.259900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory" (OuterVolumeSpecName: "inventory") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324092 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324268 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324359 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324425 4675 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324484 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") on 
node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324543 4675 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324600 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324663 4675 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324840 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324943 4675 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325028 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325091 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325151 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325214 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.501559 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerDied","Data":"e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8"} Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.501872 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.501593 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.710389 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln"] Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.710916 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.710939 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.710966 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-content" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.710975 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-content" Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.710990 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-utilities" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711000 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-utilities" Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.711030 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711040 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711251 
4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711273 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.713352 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.717124 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.717293 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.722159 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.722369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.722926 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.741369 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln"] Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.835975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: 
\"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836096 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836138 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836185 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938006 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938088 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938805 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938856 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938904 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: 
\"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.939603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.943266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.955044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.955481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.964961 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:47 crc kubenswrapper[4675]: I0124 07:28:47.036350 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:47 crc kubenswrapper[4675]: I0124 07:28:47.653316 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln"] Jan 24 07:28:47 crc kubenswrapper[4675]: E0124 07:28:47.709833 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:48 crc kubenswrapper[4675]: I0124 07:28:48.521554 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerStarted","Data":"b05aacdca52badee5e9189ade6e71c65277d2313178c6e5ed17f772dd8c61fe9"} Jan 24 07:28:48 crc kubenswrapper[4675]: I0124 07:28:48.521997 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerStarted","Data":"ddb9f54d74c2e9997a2e10177f7d4e869a6f79eac0c430b061da4c9675fb428b"} Jan 24 07:28:48 crc kubenswrapper[4675]: I0124 07:28:48.535600 4675 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" podStartSLOduration=2.064189265 podStartE2EDuration="2.535582251s" podCreationTimestamp="2026-01-24 07:28:46 +0000 UTC" firstStartedPulling="2026-01-24 07:28:47.654691862 +0000 UTC m=+2128.950797085" lastFinishedPulling="2026-01-24 07:28:48.126084848 +0000 UTC m=+2129.422190071" observedRunningTime="2026-01-24 07:28:48.535181912 +0000 UTC m=+2129.831287135" watchObservedRunningTime="2026-01-24 07:28:48.535582251 +0000 UTC m=+2129.831687474" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.626462 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.628704 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.643191 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.794406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.794666 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.794861 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.830031 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.831752 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.888239 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.896737 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.896827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.896944 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " 
pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.897167 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.897374 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.925434 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.953638 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:49.998914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:49.999071 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:49.999108 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.100640 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.100818 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod 
\"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.100847 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.101213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.101317 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.129767 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.147744 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.666682 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:28:50 crc kubenswrapper[4675]: W0124 07:28:50.691706 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a329bb_9b28_4ce2_bf06_3eab6c480c22.slice/crio-e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61 WatchSource:0}: Error finding container e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61: Status 404 returned error can't find the container with id e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61 Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.092278 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.563041 4675 generic.go:334] "Generic (PLEG): container finished" podID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerID="8a89e0914a7f452c1841d416015d4a4a17414c25699d2ec756c984d8c9a13264" exitCode=0 Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.563126 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"8a89e0914a7f452c1841d416015d4a4a17414c25699d2ec756c984d8c9a13264"} Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.563163 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerStarted","Data":"d36464ed060d00cae74b46458c91b3d1d178d0201cf341fe273c0e02b6c5bbe1"} Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.564831 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027" exitCode=0 Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.564850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"} Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.564866 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerStarted","Data":"e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61"} Jan 24 07:28:52 crc kubenswrapper[4675]: I0124 07:28:52.577030 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerStarted","Data":"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"} Jan 24 07:28:53 crc kubenswrapper[4675]: I0124 07:28:53.589344 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerStarted","Data":"1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285"} Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.623903 4675 generic.go:334] "Generic (PLEG): container finished" podID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec" exitCode=0 Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.623963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" 
event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"} Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.628147 4675 generic.go:334] "Generic (PLEG): container finished" podID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerID="1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285" exitCode=0 Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.628199 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285"} Jan 24 07:28:57 crc kubenswrapper[4675]: E0124 07:28:57.997106 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:58 crc kubenswrapper[4675]: I0124 07:28:58.658660 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerStarted","Data":"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"} Jan 24 07:28:58 crc kubenswrapper[4675]: I0124 07:28:58.668514 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerStarted","Data":"4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e"} Jan 24 07:28:58 crc kubenswrapper[4675]: I0124 
07:28:58.700577 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v6cgb" podStartSLOduration=3.968572093 podStartE2EDuration="9.700529345s" podCreationTimestamp="2026-01-24 07:28:49 +0000 UTC" firstStartedPulling="2026-01-24 07:28:51.566569721 +0000 UTC m=+2132.862674944" lastFinishedPulling="2026-01-24 07:28:57.298526973 +0000 UTC m=+2138.594632196" observedRunningTime="2026-01-24 07:28:58.68959384 +0000 UTC m=+2139.985699063" watchObservedRunningTime="2026-01-24 07:28:58.700529345 +0000 UTC m=+2139.996634598" Jan 24 07:28:59 crc kubenswrapper[4675]: I0124 07:28:59.953935 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:59 crc kubenswrapper[4675]: I0124 07:28:59.953987 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:29:00 crc kubenswrapper[4675]: I0124 07:29:00.149300 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:29:00 crc kubenswrapper[4675]: I0124 07:29:00.149680 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:29:01 crc kubenswrapper[4675]: I0124 07:29:01.115266 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-v6cgb" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" probeResult="failure" output=< Jan 24 07:29:01 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:29:01 crc kubenswrapper[4675]: > Jan 24 07:29:01 crc kubenswrapper[4675]: I0124 07:29:01.192667 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h8hbw" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" 
containerName="registry-server" probeResult="failure" output=< Jan 24 07:29:01 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:29:01 crc kubenswrapper[4675]: > Jan 24 07:29:08 crc kubenswrapper[4675]: E0124 07:29:08.210983 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:29:09 crc kubenswrapper[4675]: I0124 07:29:09.998365 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.025221 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h8hbw" podStartSLOduration=15.389534198 podStartE2EDuration="21.025191875s" podCreationTimestamp="2026-01-24 07:28:49 +0000 UTC" firstStartedPulling="2026-01-24 07:28:51.565204907 +0000 UTC m=+2132.861310130" lastFinishedPulling="2026-01-24 07:28:57.200862594 +0000 UTC m=+2138.496967807" observedRunningTime="2026-01-24 07:28:58.724668031 +0000 UTC m=+2140.020773254" watchObservedRunningTime="2026-01-24 07:29:10.025191875 +0000 UTC m=+2151.321297138" Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.055484 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.194272 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:29:10 crc 
kubenswrapper[4675]: I0124 07:29:10.236834 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.251245 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:29:11 crc kubenswrapper[4675]: I0124 07:29:11.265548 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v6cgb" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" containerID="cri-o://7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" gracePeriod=2 Jan 24 07:29:11 crc kubenswrapper[4675]: I0124 07:29:11.898692 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086101 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086315 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086441 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " Jan 24 07:29:12 
crc kubenswrapper[4675]: I0124 07:29:12.086985 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities" (OuterVolumeSpecName: "utilities") pod "35a329bb-9b28-4ce2-bf06-3eab6c480c22" (UID: "35a329bb-9b28-4ce2-bf06-3eab6c480c22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.087521 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.092088 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7" (OuterVolumeSpecName: "kube-api-access-hwvz7") pod "35a329bb-9b28-4ce2-bf06-3eab6c480c22" (UID: "35a329bb-9b28-4ce2-bf06-3eab6c480c22"). InnerVolumeSpecName "kube-api-access-hwvz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.144665 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a329bb-9b28-4ce2-bf06-3eab6c480c22" (UID: "35a329bb-9b28-4ce2-bf06-3eab6c480c22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.189265 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.189299 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") on node \"crc\" DevicePath \"\"" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275482 4675 generic.go:334] "Generic (PLEG): container finished" podID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" exitCode=0 Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275540 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275545 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"} Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275987 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61"} Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.276011 4675 scope.go:117] "RemoveContainer" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.339881 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.345010 4675 scope.go:117] "RemoveContainer" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.353881 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.455120 4675 scope.go:117] "RemoveContainer" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.460182 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.467799 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h8hbw" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server" containerID="cri-o://4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e" gracePeriod=2 Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.582417 4675 scope.go:117] "RemoveContainer" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" Jan 24 07:29:12 crc kubenswrapper[4675]: E0124 07:29:12.586426 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d\": container with ID starting with 7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d not found: ID does not exist" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.586465 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"} err="failed to get container status \"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d\": rpc error: code = NotFound desc = could not find container \"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d\": container with ID starting with 7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d not found: ID does not exist" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.586491 4675 scope.go:117] "RemoveContainer" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec" Jan 24 07:29:12 crc kubenswrapper[4675]: E0124 07:29:12.587308 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec\": container with ID starting with 2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec not found: ID does not exist" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.587331 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"} err="failed to get container status \"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec\": rpc error: code = NotFound desc = could not find container \"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec\": container with ID starting with 2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec not found: ID does not exist" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.587344 4675 scope.go:117] "RemoveContainer" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027" Jan 24 07:29:12 crc kubenswrapper[4675]: E0124 07:29:12.587565 4675 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027\": container with ID starting with 21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027 not found: ID does not exist" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.587583 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"} err="failed to get container status \"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027\": rpc error: code = NotFound desc = could not find container \"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027\": container with ID starting with 21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027 not found: ID does not exist" Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.953283 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" path="/var/lib/kubelet/pods/35a329bb-9b28-4ce2-bf06-3eab6c480c22/volumes" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.290978 4675 generic.go:334] "Generic (PLEG): container finished" podID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerID="4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e" exitCode=0 Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.291040 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e"} Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.565737 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.719455 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.719552 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.719630 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.720423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities" (OuterVolumeSpecName: "utilities") pod "a04cc3fe-10f9-4d63-b55d-3717957a05cb" (UID: "a04cc3fe-10f9-4d63-b55d-3717957a05cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.728262 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46" (OuterVolumeSpecName: "kube-api-access-4cn46") pod "a04cc3fe-10f9-4d63-b55d-3717957a05cb" (UID: "a04cc3fe-10f9-4d63-b55d-3717957a05cb"). InnerVolumeSpecName "kube-api-access-4cn46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.739918 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a04cc3fe-10f9-4d63-b55d-3717957a05cb" (UID: "a04cc3fe-10f9-4d63-b55d-3717957a05cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.822048 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") on node \"crc\" DevicePath \"\"" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.822083 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.822092 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.303264 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"d36464ed060d00cae74b46458c91b3d1d178d0201cf341fe273c0e02b6c5bbe1"} Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.303633 4675 scope.go:117] "RemoveContainer" containerID="4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e" Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.303363 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.326494 4675 scope.go:117] "RemoveContainer" containerID="1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285" Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.345452 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.355753 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.356447 4675 scope.go:117] "RemoveContainer" containerID="8a89e0914a7f452c1841d416015d4a4a17414c25699d2ec756c984d8c9a13264" Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.955349 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" path="/var/lib/kubelet/pods/a04cc3fe-10f9-4d63-b55d-3717957a05cb/volumes" Jan 24 07:29:18 crc kubenswrapper[4675]: E0124 07:29:18.432463 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache]" Jan 24 07:29:38 crc kubenswrapper[4675]: I0124 07:29:38.630438 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:29:38 crc kubenswrapper[4675]: 
I0124 07:29:38.630931 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.150387 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"] Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151160 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-content" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151172 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-content" Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151232 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151247 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151253 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server" Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151278 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-content" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 
07:30:00.151284 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-content" Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151294 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-utilities" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151299 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-utilities" Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151313 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-utilities" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151318 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-utilities" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151470 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151486 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.152131 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.158862 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.159434 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.163155 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"] Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.307514 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.307831 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.307994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.409979 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.410141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.410272 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.411999 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.415835 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.444019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.480662 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:01 crc kubenswrapper[4675]: I0124 07:30:01.437490 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"] Jan 24 07:30:02 crc kubenswrapper[4675]: I0124 07:30:02.081545 4675 generic.go:334] "Generic (PLEG): container finished" podID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerID="22d304cf2a431fcb8000201803e0e5ad01887c331686ce49b53237ea0966b67d" exitCode=0 Jan 24 07:30:02 crc kubenswrapper[4675]: I0124 07:30:02.081631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" event={"ID":"0e2fd991-68ff-45d8-bc15-e245d8de85b6","Type":"ContainerDied","Data":"22d304cf2a431fcb8000201803e0e5ad01887c331686ce49b53237ea0966b67d"} Jan 24 07:30:02 crc kubenswrapper[4675]: I0124 07:30:02.081863 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" 
event={"ID":"0e2fd991-68ff-45d8-bc15-e245d8de85b6","Type":"ContainerStarted","Data":"5007302a9d841f599454539ac30af2b2e22d273022c50491f661165afa9ea924"} Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.419323 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.488529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.488954 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.489281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.490297 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e2fd991-68ff-45d8-bc15-e245d8de85b6" (UID: "0e2fd991-68ff-45d8-bc15-e245d8de85b6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.494219 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e2fd991-68ff-45d8-bc15-e245d8de85b6" (UID: "0e2fd991-68ff-45d8-bc15-e245d8de85b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.500073 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk" (OuterVolumeSpecName: "kube-api-access-rlztk") pod "0e2fd991-68ff-45d8-bc15-e245d8de85b6" (UID: "0e2fd991-68ff-45d8-bc15-e245d8de85b6"). InnerVolumeSpecName "kube-api-access-rlztk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.591617 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.591647 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.591659 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.098360 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" 
event={"ID":"0e2fd991-68ff-45d8-bc15-e245d8de85b6","Type":"ContainerDied","Data":"5007302a9d841f599454539ac30af2b2e22d273022c50491f661165afa9ea924"} Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.098401 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5007302a9d841f599454539ac30af2b2e22d273022c50491f661165afa9ea924" Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.098437 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.499036 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.506154 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.953534 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" path="/var/lib/kubelet/pods/0b4201e4-a1e0-4256-aa5a-67383ee87bee/volumes" Jan 24 07:30:08 crc kubenswrapper[4675]: I0124 07:30:08.630047 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:30:08 crc kubenswrapper[4675]: I0124 07:30:08.630126 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:30:09 crc 
kubenswrapper[4675]: I0124 07:30:09.142598 4675 generic.go:334] "Generic (PLEG): container finished" podID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerID="b05aacdca52badee5e9189ade6e71c65277d2313178c6e5ed17f772dd8c61fe9" exitCode=0 Jan 24 07:30:09 crc kubenswrapper[4675]: I0124 07:30:09.142648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerDied","Data":"b05aacdca52badee5e9189ade6e71c65277d2313178c6e5ed17f772dd8c61fe9"} Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.614695 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734574 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734881 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734918 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734991 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.735096 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.743063 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb" (OuterVolumeSpecName: "kube-api-access-2chfb") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "kube-api-access-2chfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.763588 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.772422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.775861 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.776091 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory" (OuterVolumeSpecName: "inventory") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837685 4675 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837741 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837753 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837762 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837771 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.162892 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerDied","Data":"ddb9f54d74c2e9997a2e10177f7d4e869a6f79eac0c430b061da4c9675fb428b"} Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.163433 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb9f54d74c2e9997a2e10177f7d4e869a6f79eac0c430b061da4c9675fb428b" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.162947 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.329252 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g"] Jan 24 07:30:11 crc kubenswrapper[4675]: E0124 07:30:11.329823 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerName="collect-profiles" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.329848 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerName="collect-profiles" Jan 24 07:30:11 crc kubenswrapper[4675]: E0124 07:30:11.329875 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.329885 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.330112 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.330152 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerName="collect-profiles" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.330918 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.339957 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.340052 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.339971 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.340227 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.343310 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.343586 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.353255 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g"] Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452680 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452785 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554483 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554589 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554644 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554744 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554773 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.564340 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.565204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.565472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.583403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.586983 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.591031 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.658975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:12 crc kubenswrapper[4675]: I0124 07:30:12.205588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g"] Jan 24 07:30:13 crc kubenswrapper[4675]: I0124 07:30:13.182946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerStarted","Data":"b5e680060e34c52cdc6b24399e8e4a401f9386d221e4f141b5df1377a0d9a3c0"} Jan 24 07:30:13 crc kubenswrapper[4675]: I0124 07:30:13.183369 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerStarted","Data":"99dfba729f8f540b1cab59f4ba1e3806fcedb3caf6cfa61b89f8e3387a1f3f2b"} Jan 24 07:30:13 crc kubenswrapper[4675]: I0124 07:30:13.206157 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" podStartSLOduration=1.7352952529999999 podStartE2EDuration="2.206138603s" podCreationTimestamp="2026-01-24 07:30:11 +0000 UTC" firstStartedPulling="2026-01-24 07:30:12.229788861 +0000 UTC m=+2213.525894084" lastFinishedPulling="2026-01-24 07:30:12.700632211 +0000 UTC m=+2213.996737434" observedRunningTime="2026-01-24 07:30:13.197648547 +0000 UTC m=+2214.493753780" watchObservedRunningTime="2026-01-24 07:30:13.206138603 +0000 UTC m=+2214.502243826" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 
07:30:38.630048 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.630598 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.630646 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.631528 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.631586 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" gracePeriod=600 Jan 24 07:30:38 crc kubenswrapper[4675]: E0124 07:30:38.753054 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.435840 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" exitCode=0 Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.435884 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"} Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.435915 4675 scope.go:117] "RemoveContainer" containerID="a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2" Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.436635 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:30:39 crc kubenswrapper[4675]: E0124 07:30:39.436938 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:30:41 crc kubenswrapper[4675]: I0124 07:30:41.278076 4675 scope.go:117] "RemoveContainer" containerID="a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775" Jan 24 07:30:54 crc kubenswrapper[4675]: I0124 07:30:54.942380 4675 scope.go:117] "RemoveContainer" 
containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:30:54 crc kubenswrapper[4675]: E0124 07:30:54.943115 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:07 crc kubenswrapper[4675]: I0124 07:31:07.943097 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:07 crc kubenswrapper[4675]: E0124 07:31:07.943848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:16 crc kubenswrapper[4675]: I0124 07:31:16.199044 4675 generic.go:334] "Generic (PLEG): container finished" podID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerID="b5e680060e34c52cdc6b24399e8e4a401f9386d221e4f141b5df1377a0d9a3c0" exitCode=0 Jan 24 07:31:16 crc kubenswrapper[4675]: I0124 07:31:16.199551 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerDied","Data":"b5e680060e34c52cdc6b24399e8e4a401f9386d221e4f141b5df1377a0d9a3c0"} Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.637293 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726041 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726160 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726215 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: 
\"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726342 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.734236 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.734302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm" (OuterVolumeSpecName: "kube-api-access-c7qcm") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "kube-api-access-c7qcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.756607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.758000 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory" (OuterVolumeSpecName: "inventory") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.758083 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.759054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829415 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829557 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829573 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829584 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829593 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829601 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.219274 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerDied","Data":"99dfba729f8f540b1cab59f4ba1e3806fcedb3caf6cfa61b89f8e3387a1f3f2b"} Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.219327 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99dfba729f8f540b1cab59f4ba1e3806fcedb3caf6cfa61b89f8e3387a1f3f2b" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.219328 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.370345 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"] Jan 24 07:31:18 crc kubenswrapper[4675]: E0124 07:31:18.371314 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.371339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.371597 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.372423 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.374823 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.377548 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.380322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.380405 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.380600 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.394324 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"] Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.441782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.441876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.441911 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.442232 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.442448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.550174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.550479 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.550624 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.551064 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.551199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.556588 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: 
\"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.556896 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.560371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.566442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.569263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.714044 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:19 crc kubenswrapper[4675]: I0124 07:31:19.255954 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"] Jan 24 07:31:20 crc kubenswrapper[4675]: I0124 07:31:20.009134 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:31:20 crc kubenswrapper[4675]: I0124 07:31:20.239407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerStarted","Data":"8264419fbcac8d097619ac8c8a3c44cfde990740dcabe45435c29debb765207d"} Jan 24 07:31:21 crc kubenswrapper[4675]: I0124 07:31:21.248810 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerStarted","Data":"3cc600e74f559f08c6e306f068e4792c472d7b6a953a06f392eff3d56a90133e"} Jan 24 07:31:21 crc kubenswrapper[4675]: I0124 07:31:21.277460 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" podStartSLOduration=2.5452895509999998 podStartE2EDuration="3.277444476s" podCreationTimestamp="2026-01-24 07:31:18 +0000 UTC" firstStartedPulling="2026-01-24 07:31:19.27448369 +0000 UTC m=+2280.570588913" lastFinishedPulling="2026-01-24 07:31:20.006638595 +0000 UTC m=+2281.302743838" observedRunningTime="2026-01-24 07:31:21.272198468 +0000 UTC m=+2282.568303691" watchObservedRunningTime="2026-01-24 07:31:21.277444476 +0000 UTC m=+2282.573549699" Jan 24 07:31:21 crc kubenswrapper[4675]: I0124 07:31:21.943102 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:21 crc kubenswrapper[4675]: E0124 
07:31:21.943816 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:34 crc kubenswrapper[4675]: I0124 07:31:34.942619 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:34 crc kubenswrapper[4675]: E0124 07:31:34.943571 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:46 crc kubenswrapper[4675]: I0124 07:31:46.943270 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:46 crc kubenswrapper[4675]: E0124 07:31:46.944099 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:59 crc kubenswrapper[4675]: I0124 07:31:59.942185 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:59 crc 
kubenswrapper[4675]: E0124 07:31:59.943069 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:11 crc kubenswrapper[4675]: I0124 07:32:11.943081 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:32:11 crc kubenswrapper[4675]: E0124 07:32:11.944159 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:26 crc kubenswrapper[4675]: I0124 07:32:26.942902 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:32:26 crc kubenswrapper[4675]: E0124 07:32:26.943763 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:38 crc kubenswrapper[4675]: I0124 07:32:38.949862 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 
24 07:32:38 crc kubenswrapper[4675]: E0124 07:32:38.950537 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:53 crc kubenswrapper[4675]: I0124 07:32:53.942178 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:32:53 crc kubenswrapper[4675]: E0124 07:32:53.942853 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:33:07 crc kubenswrapper[4675]: I0124 07:33:07.943438 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:33:07 crc kubenswrapper[4675]: E0124 07:33:07.944766 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:33:22 crc kubenswrapper[4675]: I0124 07:33:22.943594 4675 scope.go:117] "RemoveContainer" 
containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:33:22 crc kubenswrapper[4675]: E0124 07:33:22.944939 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.722866 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"] Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.725280 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.737254 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"] Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.780132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.780199 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.780356 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.882118 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.882515 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.882564 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.883103 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.883172 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.901103 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:32 crc kubenswrapper[4675]: I0124 07:33:32.100295 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:32 crc kubenswrapper[4675]: I0124 07:33:32.581947 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.565693 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478" exitCode=0
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.566362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"}
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.566419 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerStarted","Data":"437f6576151dec232bbda2f3fa25c23c1adeb0d977d51222f8641a5e83331e27"}
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.569220 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 07:33:34 crc kubenswrapper[4675]: I0124 07:33:34.576248 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerStarted","Data":"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"}
Jan 24 07:33:34 crc kubenswrapper[4675]: I0124 07:33:34.944036 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:33:34 crc kubenswrapper[4675]: E0124 07:33:34.944572 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:33:38 crc kubenswrapper[4675]: I0124 07:33:38.614236 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1" exitCode=0
Jan 24 07:33:38 crc kubenswrapper[4675]: I0124 07:33:38.614622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"}
Jan 24 07:33:39 crc kubenswrapper[4675]: I0124 07:33:39.626803 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerStarted","Data":"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"}
Jan 24 07:33:39 crc kubenswrapper[4675]: I0124 07:33:39.651705 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txpwk" podStartSLOduration=3.22403018 podStartE2EDuration="8.651686799s" podCreationTimestamp="2026-01-24 07:33:31 +0000 UTC" firstStartedPulling="2026-01-24 07:33:33.569003323 +0000 UTC m=+2414.865108546" lastFinishedPulling="2026-01-24 07:33:38.996659932 +0000 UTC m=+2420.292765165" observedRunningTime="2026-01-24 07:33:39.643332296 +0000 UTC m=+2420.939437529" watchObservedRunningTime="2026-01-24 07:33:39.651686799 +0000 UTC m=+2420.947792022"
Jan 24 07:33:42 crc kubenswrapper[4675]: I0124 07:33:42.100842 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:42 crc kubenswrapper[4675]: I0124 07:33:42.101765 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:43 crc kubenswrapper[4675]: I0124 07:33:43.153381 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txpwk" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:33:43 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Jan 24 07:33:43 crc kubenswrapper[4675]: >
Jan 24 07:33:46 crc kubenswrapper[4675]: I0124 07:33:46.943564 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:33:46 crc kubenswrapper[4675]: E0124 07:33:46.944066 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:33:52 crc kubenswrapper[4675]: I0124 07:33:52.157237 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:52 crc kubenswrapper[4675]: I0124 07:33:52.221439 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:52 crc kubenswrapper[4675]: I0124 07:33:52.393264 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:53 crc kubenswrapper[4675]: I0124 07:33:53.770029 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txpwk" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server" containerID="cri-o://adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5" gracePeriod=2
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.205428 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.226861 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") "
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.226995 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") "
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.227059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") "
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.227774 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities" (OuterVolumeSpecName: "utilities") pod "5ab038a6-becf-4e29-9a38-9a92e2e7df69" (UID: "5ab038a6-becf-4e29-9a38-9a92e2e7df69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.237417 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc" (OuterVolumeSpecName: "kube-api-access-xnwkc") pod "5ab038a6-becf-4e29-9a38-9a92e2e7df69" (UID: "5ab038a6-becf-4e29-9a38-9a92e2e7df69"). InnerVolumeSpecName "kube-api-access-xnwkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.329916 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.330251 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") on node \"crc\" DevicePath \"\""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.343216 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ab038a6-becf-4e29-9a38-9a92e2e7df69" (UID: "5ab038a6-becf-4e29-9a38-9a92e2e7df69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.432422 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.778983 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5" exitCode=0
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779020 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"}
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"437f6576151dec232bbda2f3fa25c23c1adeb0d977d51222f8641a5e83331e27"}
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779062 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779072 4675 scope.go:117] "RemoveContainer" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.800685 4675 scope.go:117] "RemoveContainer" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.829102 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.832727 4675 scope.go:117] "RemoveContainer" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.842100 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.883589 4675 scope.go:117] "RemoveContainer" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"
Jan 24 07:33:54 crc kubenswrapper[4675]: E0124 07:33:54.884272 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5\": container with ID starting with adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5 not found: ID does not exist" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.884324 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"} err="failed to get container status \"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5\": rpc error: code = NotFound desc = could not find container \"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5\": container with ID starting with adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5 not found: ID does not exist"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.884380 4675 scope.go:117] "RemoveContainer" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"
Jan 24 07:33:54 crc kubenswrapper[4675]: E0124 07:33:54.884942 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1\": container with ID starting with c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1 not found: ID does not exist" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.885012 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"} err="failed to get container status \"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1\": rpc error: code = NotFound desc = could not find container \"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1\": container with ID starting with c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1 not found: ID does not exist"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.885049 4675 scope.go:117] "RemoveContainer" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"
Jan 24 07:33:54 crc kubenswrapper[4675]: E0124 07:33:54.885488 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478\": container with ID starting with 2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478 not found: ID does not exist" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.885538 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"} err="failed to get container status \"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478\": rpc error: code = NotFound desc = could not find container \"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478\": container with ID starting with 2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478 not found: ID does not exist"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.959698 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" path="/var/lib/kubelet/pods/5ab038a6-becf-4e29-9a38-9a92e2e7df69/volumes"
Jan 24 07:33:58 crc kubenswrapper[4675]: I0124 07:33:58.948360 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:33:58 crc kubenswrapper[4675]: E0124 07:33:58.949295 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:10 crc kubenswrapper[4675]: I0124 07:34:10.943364 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:10 crc kubenswrapper[4675]: E0124 07:34:10.944179 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:22 crc kubenswrapper[4675]: I0124 07:34:22.944535 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:22 crc kubenswrapper[4675]: E0124 07:34:22.945326 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:35 crc kubenswrapper[4675]: I0124 07:34:35.942609 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:35 crc kubenswrapper[4675]: E0124 07:34:35.943248 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:50 crc kubenswrapper[4675]: I0124 07:34:50.945668 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:50 crc kubenswrapper[4675]: E0124 07:34:50.946500 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:04 crc kubenswrapper[4675]: I0124 07:35:04.948701 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:04 crc kubenswrapper[4675]: E0124 07:35:04.950473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:18 crc kubenswrapper[4675]: I0124 07:35:18.949383 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:18 crc kubenswrapper[4675]: E0124 07:35:18.951159 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:32 crc kubenswrapper[4675]: I0124 07:35:32.942875 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:32 crc kubenswrapper[4675]: E0124 07:35:32.943889 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:46 crc kubenswrapper[4675]: I0124 07:35:46.942817 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:48 crc kubenswrapper[4675]: I0124 07:35:48.012472 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4"}
Jan 24 07:37:44 crc kubenswrapper[4675]: I0124 07:37:44.070969 4675 generic.go:334] "Generic (PLEG): container finished" podID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerID="3cc600e74f559f08c6e306f068e4792c472d7b6a953a06f392eff3d56a90133e" exitCode=0
Jan 24 07:37:44 crc kubenswrapper[4675]: I0124 07:37:44.071051 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerDied","Data":"3cc600e74f559f08c6e306f068e4792c472d7b6a953a06f392eff3d56a90133e"}
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.573554 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598458 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598602 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598630 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598673 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598797 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.604602 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s" (OuterVolumeSpecName: "kube-api-access-t7l7s") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "kube-api-access-t7l7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.620740 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.639495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory" (OuterVolumeSpecName: "inventory") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.647061 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.649507 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700879 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700907 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700917 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700925 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700934 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.097529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerDied","Data":"8264419fbcac8d097619ac8c8a3c44cfde990740dcabe45435c29debb765207d"}
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.097564 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8264419fbcac8d097619ac8c8a3c44cfde990740dcabe45435c29debb765207d"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.097648 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.203895 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"]
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204316 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204365 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server"
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204404 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-utilities"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204414 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-utilities"
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204448 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-content"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204459 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-content"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204760 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204795 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.205710 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209111 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209448 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209466 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209490 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209522 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.210087 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.210170 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.219686 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"]
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.311665 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312402 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312673 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312819 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.313088 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415195 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415522 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415553 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415644 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.418953 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.420893 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430701 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.431439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.431543 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.436768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.528211 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:47 crc kubenswrapper[4675]: I0124 07:37:47.135477 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"] Jan 24 07:37:48 crc kubenswrapper[4675]: I0124 07:37:48.135419 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerStarted","Data":"2e2ca50b9a099a17d75a3291c986ddd856869d868abc722788811a80f95d193b"} Jan 24 07:37:49 crc kubenswrapper[4675]: I0124 07:37:49.146193 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerStarted","Data":"c0366ed663d9377d6d87bee264a658d87b4c1d06b5f76d0f3bcdc27e1803092b"} Jan 24 07:37:49 crc kubenswrapper[4675]: I0124 07:37:49.192332 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" podStartSLOduration=2.368853603 podStartE2EDuration="3.192314405s" podCreationTimestamp="2026-01-24 07:37:46 +0000 UTC" firstStartedPulling="2026-01-24 07:37:47.148028484 +0000 UTC m=+2668.444133707" lastFinishedPulling="2026-01-24 07:37:47.971489266 +0000 UTC m=+2669.267594509" observedRunningTime="2026-01-24 07:37:49.189959758 +0000 UTC m=+2670.486064981" watchObservedRunningTime="2026-01-24 07:37:49.192314405 +0000 UTC m=+2670.488419628" Jan 24 07:38:08 crc kubenswrapper[4675]: I0124 07:38:08.629928 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:38:08 crc kubenswrapper[4675]: I0124 07:38:08.630502 
4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:38:38 crc kubenswrapper[4675]: I0124 07:38:38.630554 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:38:38 crc kubenswrapper[4675]: I0124 07:38:38.631638 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.630069 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.630676 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.630748 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.631526 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.631580 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4" gracePeriod=600 Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.867439 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4" exitCode=0 Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.867481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4"} Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.867516 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:39:09 crc kubenswrapper[4675]: I0124 07:39:09.880843 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"} Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.859433 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.861641 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.868007 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.941637 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.941975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.942251 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.044306 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.044413 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.044517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.045273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.045328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.066395 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.193283 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.712442 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.294181 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687" exitCode=0 Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.294330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"} Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.294480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerStarted","Data":"1267f1241a6b5e2d15d5b790f6e8a19388ac394099a6d4aa48d1de2bd8be8847"} Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.297349 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.228543 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.230626 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.250173 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.291822 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.291956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cb4f\" (UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.291980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.303610 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerStarted","Data":"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"} Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.393611 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cb4f\" 
(UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.393675 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.393793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.394142 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.394225 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.425195 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cb4f\" (UniqueName: 
\"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.547948 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:02 crc kubenswrapper[4675]: I0124 07:40:02.835980 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:02 crc kubenswrapper[4675]: W0124 07:40:02.849940 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ca4082_bd83_4643_bbe5_a41ea26c4ce9.slice/crio-a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6 WatchSource:0}: Error finding container a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6: Status 404 returned error can't find the container with id a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6 Jan 24 07:40:03 crc kubenswrapper[4675]: I0124 07:40:03.339182 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerStarted","Data":"a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6"} Jan 24 07:40:03 crc kubenswrapper[4675]: I0124 07:40:03.343340 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430" exitCode=0 Jan 24 07:40:03 crc kubenswrapper[4675]: I0124 07:40:03.343385 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" 
event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"} Jan 24 07:40:04 crc kubenswrapper[4675]: I0124 07:40:04.354547 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4" exitCode=0 Jan 24 07:40:04 crc kubenswrapper[4675]: I0124 07:40:04.354841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"} Jan 24 07:40:05 crc kubenswrapper[4675]: I0124 07:40:05.366398 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerStarted","Data":"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"} Jan 24 07:40:05 crc kubenswrapper[4675]: I0124 07:40:05.369085 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerStarted","Data":"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"} Jan 24 07:40:05 crc kubenswrapper[4675]: I0124 07:40:05.422635 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-77wsd" podStartSLOduration=3.156957534 podStartE2EDuration="8.422615352s" podCreationTimestamp="2026-01-24 07:39:57 +0000 UTC" firstStartedPulling="2026-01-24 07:39:59.297033126 +0000 UTC m=+2800.593138349" lastFinishedPulling="2026-01-24 07:40:04.562690934 +0000 UTC m=+2805.858796167" observedRunningTime="2026-01-24 07:40:05.418891022 +0000 UTC m=+2806.714996255" watchObservedRunningTime="2026-01-24 07:40:05.422615352 +0000 UTC 
m=+2806.718720595" Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.194023 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.194527 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.406201 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b" exitCode=0 Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.406276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"} Jan 24 07:40:09 crc kubenswrapper[4675]: I0124 07:40:09.246993 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-77wsd" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" probeResult="failure" output=< Jan 24 07:40:09 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:40:09 crc kubenswrapper[4675]: > Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.426252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerStarted","Data":"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"} Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.447790 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z56sj" podStartSLOduration=5.553043548 podStartE2EDuration="10.447769382s" 
podCreationTimestamp="2026-01-24 07:40:00 +0000 UTC" firstStartedPulling="2026-01-24 07:40:04.358089707 +0000 UTC m=+2805.654194960" lastFinishedPulling="2026-01-24 07:40:09.252815581 +0000 UTC m=+2810.548920794" observedRunningTime="2026-01-24 07:40:10.44520905 +0000 UTC m=+2811.741314273" watchObservedRunningTime="2026-01-24 07:40:10.447769382 +0000 UTC m=+2811.743874605" Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.548535 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.548593 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:11 crc kubenswrapper[4675]: I0124 07:40:11.591427 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z56sj" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" probeResult="failure" output=< Jan 24 07:40:11 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:40:11 crc kubenswrapper[4675]: > Jan 24 07:40:18 crc kubenswrapper[4675]: I0124 07:40:18.241955 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:40:18 crc kubenswrapper[4675]: I0124 07:40:18.297852 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:40:18 crc kubenswrapper[4675]: I0124 07:40:18.478003 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.497860 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-77wsd" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" 
containerID="cri-o://ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" gracePeriod=2 Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.917050 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.988163 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"8ccf64a6-f29e-4977-84d5-597321d0aa40\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.988242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"8ccf64a6-f29e-4977-84d5-597321d0aa40\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.988263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"8ccf64a6-f29e-4977-84d5-597321d0aa40\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.989428 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities" (OuterVolumeSpecName: "utilities") pod "8ccf64a6-f29e-4977-84d5-597321d0aa40" (UID: "8ccf64a6-f29e-4977-84d5-597321d0aa40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.998193 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w" (OuterVolumeSpecName: "kube-api-access-4248w") pod "8ccf64a6-f29e-4977-84d5-597321d0aa40" (UID: "8ccf64a6-f29e-4977-84d5-597321d0aa40"). InnerVolumeSpecName "kube-api-access-4248w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.008548 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ccf64a6-f29e-4977-84d5-597321d0aa40" (UID: "8ccf64a6-f29e-4977-84d5-597321d0aa40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.090121 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.090163 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.090172 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509808 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ccf64a6-f29e-4977-84d5-597321d0aa40" 
containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" exitCode=0 Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509882 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509886 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"} Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"1267f1241a6b5e2d15d5b790f6e8a19388ac394099a6d4aa48d1de2bd8be8847"} Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.510009 4675 scope.go:117] "RemoveContainer" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.560212 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.568533 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.569114 4675 scope.go:117] "RemoveContainer" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.601072 4675 scope.go:117] "RemoveContainer" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.607063 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z56sj" 
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.651766 4675 scope.go:117] "RemoveContainer" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" Jan 24 07:40:20 crc kubenswrapper[4675]: E0124 07:40:20.653275 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff\": container with ID starting with ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff not found: ID does not exist" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653315 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"} err="failed to get container status \"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff\": rpc error: code = NotFound desc = could not find container \"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff\": container with ID starting with ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff not found: ID does not exist" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653344 4675 scope.go:117] "RemoveContainer" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430" Jan 24 07:40:20 crc kubenswrapper[4675]: E0124 07:40:20.653822 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430\": container with ID starting with 743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430 not found: ID does not exist" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653850 4675 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"} err="failed to get container status \"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430\": rpc error: code = NotFound desc = could not find container \"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430\": container with ID starting with 743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430 not found: ID does not exist" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653864 4675 scope.go:117] "RemoveContainer" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687" Jan 24 07:40:20 crc kubenswrapper[4675]: E0124 07:40:20.654163 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687\": container with ID starting with 1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687 not found: ID does not exist" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.654191 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"} err="failed to get container status \"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687\": rpc error: code = NotFound desc = could not find container \"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687\": container with ID starting with 1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687 not found: ID does not exist" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.657585 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.952605 4675 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" path="/var/lib/kubelet/pods/8ccf64a6-f29e-4977-84d5-597321d0aa40/volumes" Jan 24 07:40:22 crc kubenswrapper[4675]: I0124 07:40:22.874617 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:22 crc kubenswrapper[4675]: I0124 07:40:22.874874 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z56sj" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" containerID="cri-o://985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" gracePeriod=2 Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.315143 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.353498 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cb4f\" (UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.353781 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.353838 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " Jan 24 07:40:23 crc 
kubenswrapper[4675]: I0124 07:40:23.354682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities" (OuterVolumeSpecName: "utilities") pod "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" (UID: "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.367863 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f" (OuterVolumeSpecName: "kube-api-access-9cb4f") pod "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" (UID: "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9"). InnerVolumeSpecName "kube-api-access-9cb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.418240 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" (UID: "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.456037 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.456073 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.456087 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cb4f\" (UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538492 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" exitCode=0 Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"} Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538628 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6"} Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538660 4675 scope.go:117] "RemoveContainer" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 
07:40:23.539050 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.573221 4675 scope.go:117] "RemoveContainer" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.621420 4675 scope.go:117] "RemoveContainer" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.626102 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.639606 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.654041 4675 scope.go:117] "RemoveContainer" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" Jan 24 07:40:23 crc kubenswrapper[4675]: E0124 07:40:23.655059 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b\": container with ID starting with 985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b not found: ID does not exist" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655104 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"} err="failed to get container status \"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b\": rpc error: code = NotFound desc = could not find container \"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b\": container with ID starting with 
985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b not found: ID does not exist" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655131 4675 scope.go:117] "RemoveContainer" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b" Jan 24 07:40:23 crc kubenswrapper[4675]: E0124 07:40:23.655429 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b\": container with ID starting with 68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b not found: ID does not exist" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655464 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"} err="failed to get container status \"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b\": rpc error: code = NotFound desc = could not find container \"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b\": container with ID starting with 68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b not found: ID does not exist" Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655487 4675 scope.go:117] "RemoveContainer" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4" Jan 24 07:40:23 crc kubenswrapper[4675]: E0124 07:40:23.655684 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4\": container with ID starting with 4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4 not found: ID does not exist" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4" Jan 24 07:40:23 crc 
kubenswrapper[4675]: I0124 07:40:23.655708 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"} err="failed to get container status \"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4\": rpc error: code = NotFound desc = could not find container \"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4\": container with ID starting with 4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4 not found: ID does not exist" Jan 24 07:40:24 crc kubenswrapper[4675]: I0124 07:40:24.955089 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" path="/var/lib/kubelet/pods/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9/volumes" Jan 24 07:40:47 crc kubenswrapper[4675]: I0124 07:40:47.774265 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4024f70-df50-442c-bcd5-c599d978277c" containerID="c0366ed663d9377d6d87bee264a658d87b4c1d06b5f76d0f3bcdc27e1803092b" exitCode=0 Jan 24 07:40:47 crc kubenswrapper[4675]: I0124 07:40:47.774330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerDied","Data":"c0366ed663d9377d6d87bee264a658d87b4c1d06b5f76d0f3bcdc27e1803092b"} Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.234256 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428267 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428415 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428433 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428460 
4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428479 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428552 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428570 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.441347 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.443475 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69" (OuterVolumeSpecName: "kube-api-access-xbb69") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "kube-api-access-xbb69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.465374 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.467270 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.476105 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.478636 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.479328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.492230 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory" (OuterVolumeSpecName: "inventory") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.494966 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532004 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532045 4675 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532054 4675 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532063 4675 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532074 4675 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532083 4675 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532092 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532102 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532110 4675 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.804579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerDied","Data":"2e2ca50b9a099a17d75a3291c986ddd856869d868abc722788811a80f95d193b"} Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.804649 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e2ca50b9a099a17d75a3291c986ddd856869d868abc722788811a80f95d193b" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.805003 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946388 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"] Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.946914 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946941 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.946956 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-utilities" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946965 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-utilities" Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.946990 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-content" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946999 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-content" Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947017 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4024f70-df50-442c-bcd5-c599d978277c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947025 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4024f70-df50-442c-bcd5-c599d978277c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947035 4675 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947043 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947065 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-utilities" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947073 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-utilities" Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947091 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-content" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947100 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-content" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947308 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947326 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947342 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4024f70-df50-442c-bcd5-c599d978277c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.948137 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.950677 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.950906 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.951033 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.951065 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.951146 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.973144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"] Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.143378 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.143982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144209 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144285 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.263959 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264006 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264039 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264095 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264122 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.265395 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.265779 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.270459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.270662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.271310 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.271387 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.271792 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.274003 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.286435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.578010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:51 crc kubenswrapper[4675]: W0124 07:40:51.153212 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode47d7738_3361_429e_90f9_02dee4f0052e.slice/crio-3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d WatchSource:0}: Error finding container 3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d: Status 404 returned error can't find the container with id 3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d Jan 24 07:40:51 crc kubenswrapper[4675]: I0124 07:40:51.153264 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"] Jan 24 07:40:51 crc kubenswrapper[4675]: I0124 07:40:51.824284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerStarted","Data":"3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d"} Jan 24 07:40:52 crc kubenswrapper[4675]: I0124 07:40:52.836276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerStarted","Data":"bfbdcdb935d25f2cf70f9a3ec57607a22f91330bedca2616e1a718dd2768d23e"} Jan 24 07:40:52 crc kubenswrapper[4675]: I0124 07:40:52.865923 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" podStartSLOduration=3.049226256 podStartE2EDuration="3.865903134s" podCreationTimestamp="2026-01-24 07:40:49 +0000 UTC" firstStartedPulling="2026-01-24 07:40:51.156548304 +0000 UTC m=+2852.452653537" lastFinishedPulling="2026-01-24 07:40:51.973225182 +0000 UTC m=+2853.269330415" 
observedRunningTime="2026-01-24 07:40:52.856361172 +0000 UTC m=+2854.152466425" watchObservedRunningTime="2026-01-24 07:40:52.865903134 +0000 UTC m=+2854.162008367" Jan 24 07:41:08 crc kubenswrapper[4675]: I0124 07:41:08.630239 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:41:08 crc kubenswrapper[4675]: I0124 07:41:08.630902 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:41:38 crc kubenswrapper[4675]: I0124 07:41:38.634154 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:41:38 crc kubenswrapper[4675]: I0124 07:41:38.634735 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.629797 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.630511 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.630574 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.631635 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.631779 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" gracePeriod=600 Jan 24 07:42:09 crc kubenswrapper[4675]: E0124 07:42:09.480573 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.780304 
4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" exitCode=0 Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.780445 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"} Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.780511 4675 scope.go:117] "RemoveContainer" containerID="d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4" Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.781835 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:09 crc kubenswrapper[4675]: E0124 07:42:09.782295 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:23 crc kubenswrapper[4675]: I0124 07:42:23.942165 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:23 crc kubenswrapper[4675]: E0124 07:42:23.942986 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:37 crc kubenswrapper[4675]: I0124 07:42:37.942498 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:37 crc kubenswrapper[4675]: E0124 07:42:37.944097 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:51 crc kubenswrapper[4675]: I0124 07:42:51.942368 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:51 crc kubenswrapper[4675]: E0124 07:42:51.943220 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:04 crc kubenswrapper[4675]: I0124 07:43:04.944001 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:04 crc kubenswrapper[4675]: E0124 07:43:04.944709 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:16 crc kubenswrapper[4675]: I0124 07:43:16.942375 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:16 crc kubenswrapper[4675]: E0124 07:43:16.943054 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:29 crc kubenswrapper[4675]: I0124 07:43:29.942839 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:29 crc kubenswrapper[4675]: E0124 07:43:29.943774 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.181209 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.183984 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.190669 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.299390 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.299896 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.299932 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402111 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.447987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.504216 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.942451 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:44 crc kubenswrapper[4675]: E0124 07:43:44.942958 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.003602 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.615934 4675 generic.go:334] "Generic (PLEG): container finished" podID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2" exitCode=0 Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.616026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"} Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.616209 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerStarted","Data":"0e2ea1a793a15a8b53384e8d6fe49b51ff8604d90512645824867c9af7f24df5"} Jan 24 07:43:46 crc kubenswrapper[4675]: I0124 07:43:46.629263 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" 
event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerStarted","Data":"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"} Jan 24 07:43:50 crc kubenswrapper[4675]: I0124 07:43:50.679487 4675 generic.go:334] "Generic (PLEG): container finished" podID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215" exitCode=0 Jan 24 07:43:50 crc kubenswrapper[4675]: I0124 07:43:50.679845 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"} Jan 24 07:43:52 crc kubenswrapper[4675]: I0124 07:43:52.700222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerStarted","Data":"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"} Jan 24 07:43:52 crc kubenswrapper[4675]: I0124 07:43:52.727502 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdqth" podStartSLOduration=2.154755847 podStartE2EDuration="8.727482191s" podCreationTimestamp="2026-01-24 07:43:44 +0000 UTC" firstStartedPulling="2026-01-24 07:43:45.617288932 +0000 UTC m=+3026.913394155" lastFinishedPulling="2026-01-24 07:43:52.190015276 +0000 UTC m=+3033.486120499" observedRunningTime="2026-01-24 07:43:52.72044931 +0000 UTC m=+3034.016554533" watchObservedRunningTime="2026-01-24 07:43:52.727482191 +0000 UTC m=+3034.023587404" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.505156 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.505486 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.693594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7b874"] Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.695900 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.707509 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b874"] Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.810738 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.810804 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.810860 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.913225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.913806 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.914012 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.914246 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.914690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.935984 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94d6w\" (UniqueName: 
\"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.032262 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.556399 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdqth" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" probeResult="failure" output=< Jan 24 07:43:55 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:43:55 crc kubenswrapper[4675]: > Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.668771 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b874"] Jan 24 07:43:55 crc kubenswrapper[4675]: W0124 07:43:55.675567 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58c4feae_4844_4d57_abb6_e3128e04b0d8.slice/crio-0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f WatchSource:0}: Error finding container 0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f: Status 404 returned error can't find the container with id 0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.767819 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerStarted","Data":"0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f"} Jan 24 07:43:56 crc kubenswrapper[4675]: I0124 07:43:56.778445 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100" exitCode=0 Jan 24 07:43:56 crc kubenswrapper[4675]: I0124 07:43:56.778833 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"} Jan 24 07:43:57 crc kubenswrapper[4675]: I0124 07:43:57.788103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerStarted","Data":"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"} Jan 24 07:43:58 crc kubenswrapper[4675]: I0124 07:43:58.799611 4675 generic.go:334] "Generic (PLEG): container finished" podID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95" exitCode=0 Jan 24 07:43:58 crc kubenswrapper[4675]: I0124 07:43:58.799760 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"} Jan 24 07:43:59 crc kubenswrapper[4675]: I0124 07:43:59.810440 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerStarted","Data":"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"} Jan 24 07:43:59 crc kubenswrapper[4675]: I0124 07:43:59.833070 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7b874" podStartSLOduration=3.13977233 podStartE2EDuration="5.833055078s" podCreationTimestamp="2026-01-24 07:43:54 +0000 UTC" 
firstStartedPulling="2026-01-24 07:43:56.781203127 +0000 UTC m=+3038.077308350" lastFinishedPulling="2026-01-24 07:43:59.474485875 +0000 UTC m=+3040.770591098" observedRunningTime="2026-01-24 07:43:59.831794817 +0000 UTC m=+3041.127900040" watchObservedRunningTime="2026-01-24 07:43:59.833055078 +0000 UTC m=+3041.129160301" Jan 24 07:43:59 crc kubenswrapper[4675]: I0124 07:43:59.949028 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:59 crc kubenswrapper[4675]: E0124 07:43:59.949288 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.032786 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.033652 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.094767 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.559977 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdqth" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" probeResult="failure" output=< Jan 24 07:44:05 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:44:05 crc kubenswrapper[4675]: > Jan 24 07:44:05 
crc kubenswrapper[4675]: I0124 07:44:05.963004 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:44:06 crc kubenswrapper[4675]: I0124 07:44:06.011443 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b874"] Jan 24 07:44:07 crc kubenswrapper[4675]: I0124 07:44:07.896908 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7b874" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="registry-server" containerID="cri-o://29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" gracePeriod=2 Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.373629 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.480436 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"58c4feae-4844-4d57-abb6-e3128e04b0d8\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.480523 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"58c4feae-4844-4d57-abb6-e3128e04b0d8\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.480570 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"58c4feae-4844-4d57-abb6-e3128e04b0d8\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " Jan 24 
07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.481365 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities" (OuterVolumeSpecName: "utilities") pod "58c4feae-4844-4d57-abb6-e3128e04b0d8" (UID: "58c4feae-4844-4d57-abb6-e3128e04b0d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.486096 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w" (OuterVolumeSpecName: "kube-api-access-94d6w") pod "58c4feae-4844-4d57-abb6-e3128e04b0d8" (UID: "58c4feae-4844-4d57-abb6-e3128e04b0d8"). InnerVolumeSpecName "kube-api-access-94d6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.526083 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58c4feae-4844-4d57-abb6-e3128e04b0d8" (UID: "58c4feae-4844-4d57-abb6-e3128e04b0d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.583138 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.583181 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.583196 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909058 4675 generic.go:334] "Generic (PLEG): container finished" podID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" exitCode=0 Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"} Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909127 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b874" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909152 4675 scope.go:117] "RemoveContainer" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909139 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f"} Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.934023 4675 scope.go:117] "RemoveContainer" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.968972 4675 scope.go:117] "RemoveContainer" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100" Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.983603 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b874"] Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.994034 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7b874"] Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.012303 4675 scope.go:117] "RemoveContainer" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" Jan 24 07:44:09 crc kubenswrapper[4675]: E0124 07:44:09.012838 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3\": container with ID starting with 29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3 not found: ID does not exist" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.012869 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"} err="failed to get container status \"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3\": rpc error: code = NotFound desc = could not find container \"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3\": container with ID starting with 29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3 not found: ID does not exist" Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.012889 4675 scope.go:117] "RemoveContainer" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95" Jan 24 07:44:09 crc kubenswrapper[4675]: E0124 07:44:09.013147 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95\": container with ID starting with cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95 not found: ID does not exist" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95" Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.013180 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"} err="failed to get container status \"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95\": rpc error: code = NotFound desc = could not find container \"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95\": container with ID starting with cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95 not found: ID does not exist" Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.013199 4675 scope.go:117] "RemoveContainer" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100" Jan 24 07:44:09 crc kubenswrapper[4675]: E0124 
07:44:09.013541 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100\": container with ID starting with 416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100 not found: ID does not exist" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100" Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.013564 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"} err="failed to get container status \"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100\": rpc error: code = NotFound desc = could not find container \"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100\": container with ID starting with 416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100 not found: ID does not exist" Jan 24 07:44:10 crc kubenswrapper[4675]: I0124 07:44:10.957206 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" path="/var/lib/kubelet/pods/58c4feae-4844-4d57-abb6-e3128e04b0d8/volumes" Jan 24 07:44:11 crc kubenswrapper[4675]: I0124 07:44:11.943606 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:44:11 crc kubenswrapper[4675]: E0124 07:44:11.944211 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:44:14 crc kubenswrapper[4675]: I0124 07:44:14.557113 
4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:44:14 crc kubenswrapper[4675]: I0124 07:44:14.624356 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:44:15 crc kubenswrapper[4675]: I0124 07:44:15.384011 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.002382 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdqth" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" containerID="cri-o://2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" gracePeriod=2 Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.441980 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.569482 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.569631 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.569675 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.570483 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities" (OuterVolumeSpecName: "utilities") pod "2ffaa9bd-d5dd-4a69-a74b-239be16a2199" (UID: "2ffaa9bd-d5dd-4a69-a74b-239be16a2199"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.575322 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85" (OuterVolumeSpecName: "kube-api-access-7vs85") pod "2ffaa9bd-d5dd-4a69-a74b-239be16a2199" (UID: "2ffaa9bd-d5dd-4a69-a74b-239be16a2199"). InnerVolumeSpecName "kube-api-access-7vs85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.669291 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ffaa9bd-d5dd-4a69-a74b-239be16a2199" (UID: "2ffaa9bd-d5dd-4a69-a74b-239be16a2199"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.671873 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.671906 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.671916 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.014699 4675 generic.go:334] "Generic (PLEG): container finished" podID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" exitCode=0 Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.014798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"} Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.015068 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"0e2ea1a793a15a8b53384e8d6fe49b51ff8604d90512645824867c9af7f24df5"} Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.014853 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.015226 4675 scope.go:117] "RemoveContainer" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.046751 4675 scope.go:117] "RemoveContainer" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.075298 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.086080 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.096747 4675 scope.go:117] "RemoveContainer" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.145944 4675 scope.go:117] "RemoveContainer" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" Jan 24 07:44:17 crc kubenswrapper[4675]: E0124 07:44:17.146363 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb\": container with ID starting with 2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb not found: ID does not exist" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.146454 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"} err="failed to get container status \"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb\": rpc error: code = NotFound desc = could not find container 
\"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb\": container with ID starting with 2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb not found: ID does not exist" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.146538 4675 scope.go:117] "RemoveContainer" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215" Jan 24 07:44:17 crc kubenswrapper[4675]: E0124 07:44:17.147453 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215\": container with ID starting with 15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215 not found: ID does not exist" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.147566 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"} err="failed to get container status \"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215\": rpc error: code = NotFound desc = could not find container \"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215\": container with ID starting with 15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215 not found: ID does not exist" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.147630 4675 scope.go:117] "RemoveContainer" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2" Jan 24 07:44:17 crc kubenswrapper[4675]: E0124 07:44:17.148190 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2\": container with ID starting with 10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2 not found: ID does not exist" 
containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2" Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.148239 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"} err="failed to get container status \"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2\": rpc error: code = NotFound desc = could not find container \"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2\": container with ID starting with 10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2 not found: ID does not exist" Jan 24 07:44:18 crc kubenswrapper[4675]: I0124 07:44:18.959305 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" path="/var/lib/kubelet/pods/2ffaa9bd-d5dd-4a69-a74b-239be16a2199/volumes" Jan 24 07:44:22 crc kubenswrapper[4675]: I0124 07:44:22.943608 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:44:22 crc kubenswrapper[4675]: E0124 07:44:22.944357 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:44:36 crc kubenswrapper[4675]: I0124 07:44:36.942528 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:44:36 crc kubenswrapper[4675]: E0124 07:44:36.943184 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:44:41 crc kubenswrapper[4675]: I0124 07:44:41.230140 4675 generic.go:334] "Generic (PLEG): container finished" podID="e47d7738-3361-429e-90f9-02dee4f0052e" containerID="bfbdcdb935d25f2cf70f9a3ec57607a22f91330bedca2616e1a718dd2768d23e" exitCode=0 Jan 24 07:44:41 crc kubenswrapper[4675]: I0124 07:44:41.230238 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerDied","Data":"bfbdcdb935d25f2cf70f9a3ec57607a22f91330bedca2616e1a718dd2768d23e"} Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.689628 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.744875 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.744931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.744993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745025 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745217 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745252 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745323 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.759938 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") 
pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.765621 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765" (OuterVolumeSpecName: "kube-api-access-sw765") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "kube-api-access-sw765". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.774526 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.777267 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory" (OuterVolumeSpecName: "inventory") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.777874 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.789484 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.789761 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848530 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848566 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848580 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848593 4675 
reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848605 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848616 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848629 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:43 crc kubenswrapper[4675]: I0124 07:44:43.254332 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerDied","Data":"3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d"} Jan 24 07:44:43 crc kubenswrapper[4675]: I0124 07:44:43.254687 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d" Jan 24 07:44:43 crc kubenswrapper[4675]: I0124 07:44:43.254475 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:44:51 crc kubenswrapper[4675]: I0124 07:44:51.942284 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:44:51 crc kubenswrapper[4675]: E0124 07:44:51.944381 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.149942 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x"] Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151009 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151029 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151055 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151064 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151091 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: 
I0124 07:45:00.151100 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151111 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47d7738-3361-429e-90f9-02dee4f0052e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151119 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47d7738-3361-429e-90f9-02dee4f0052e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151138 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151146 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151160 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151170 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151187 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151194 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151411 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" 
containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151438 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47d7738-3361-429e-90f9-02dee4f0052e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151462 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.152278 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.160053 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.160053 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.173797 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x"] Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.208751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.208828 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.209024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.310784 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.310868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.310920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: 
I0124 07:45:00.312044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.333453 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.333801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.474884 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.004623 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x"] Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.511871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerStarted","Data":"0d6f4c897e3fd3da783cc55d6eee5105fa978a22baf600d528d5387a2c99beeb"} Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.514179 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerStarted","Data":"1874a7e18cac42a442f2f74841584b88f2922de05742b97b2f3ab3b8ad02b02d"} Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.542056 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" podStartSLOduration=1.542038815 podStartE2EDuration="1.542038815s" podCreationTimestamp="2026-01-24 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:45:01.536995173 +0000 UTC m=+3102.833100396" watchObservedRunningTime="2026-01-24 07:45:01.542038815 +0000 UTC m=+3102.838144038" Jan 24 07:45:02 crc kubenswrapper[4675]: I0124 07:45:02.521834 4675 generic.go:334] "Generic (PLEG): container finished" podID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerID="0d6f4c897e3fd3da783cc55d6eee5105fa978a22baf600d528d5387a2c99beeb" exitCode=0 Jan 24 07:45:02 crc kubenswrapper[4675]: I0124 07:45:02.522528 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerDied","Data":"0d6f4c897e3fd3da783cc55d6eee5105fa978a22baf600d528d5387a2c99beeb"} Jan 24 07:45:03 crc kubenswrapper[4675]: I0124 07:45:03.854300 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:03 crc kubenswrapper[4675]: I0124 07:45:03.943457 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:03 crc kubenswrapper[4675]: E0124 07:45:03.944089 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.012429 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.012505 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.012590 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.013668 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" (UID: "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.024573 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" (UID: "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.027019 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7" (OuterVolumeSpecName: "kube-api-access-fbxw7") pod "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" (UID: "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd"). InnerVolumeSpecName "kube-api-access-fbxw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.116623 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.116661 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.116670 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.547904 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerDied","Data":"1874a7e18cac42a442f2f74841584b88f2922de05742b97b2f3ab3b8ad02b02d"} Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.547973 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1874a7e18cac42a442f2f74841584b88f2922de05742b97b2f3ab3b8ad02b02d" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.548047 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.641755 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.654161 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.958440 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" path="/var/lib/kubelet/pods/df2777ca-be51-4dc3-b7da-d84bd7ca16c4/volumes" Jan 24 07:45:18 crc kubenswrapper[4675]: I0124 07:45:18.987542 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:18 crc kubenswrapper[4675]: E0124 07:45:18.988917 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.649553 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 07:45:28 crc kubenswrapper[4675]: E0124 07:45:28.650512 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerName="collect-profiles" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.650528 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerName="collect-profiles" Jan 24 07:45:28 crc 
kubenswrapper[4675]: I0124 07:45:28.650779 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerName="collect-profiles" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.651513 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.654269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.654552 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zffgp" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.659611 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.660982 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.661435 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.764091 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.764524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.764684 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765679 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765822 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765977 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868066 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868175 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " 
pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868396 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868430 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868589 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.869554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.870451 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.870805 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.871160 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.871363 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.877430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.877439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.891610 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.896767 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.904425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") 
" pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.984748 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:45:29 crc kubenswrapper[4675]: I0124 07:45:29.480513 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:45:29 crc kubenswrapper[4675]: I0124 07:45:29.487282 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 07:45:29 crc kubenswrapper[4675]: I0124 07:45:29.846534 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerStarted","Data":"b58f40ef8b59d9e13d49226574fe9e8046a0f1e06fd832dbb76212fb15d47084"} Jan 24 07:45:31 crc kubenswrapper[4675]: I0124 07:45:31.950800 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:31 crc kubenswrapper[4675]: E0124 07:45:31.951303 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:41 crc kubenswrapper[4675]: I0124 07:45:41.673077 4675 scope.go:117] "RemoveContainer" containerID="2ee1de4c569b0dfae84a9127d5e07bf0bf62a91389eaf5b8b6361fce4ef2d02f" Jan 24 07:45:44 crc kubenswrapper[4675]: I0124 07:45:44.943948 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:44 crc kubenswrapper[4675]: E0124 07:45:44.944830 4675 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:55 crc kubenswrapper[4675]: I0124 07:45:55.943252 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:55 crc kubenswrapper[4675]: E0124 07:45:55.944108 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:06 crc kubenswrapper[4675]: I0124 07:46:06.943025 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:06 crc kubenswrapper[4675]: E0124 07:46:06.943643 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:10 crc kubenswrapper[4675]: E0124 07:46:10.968455 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 24 
07:46:10 crc kubenswrapper[4675]: E0124 07:46:10.970396 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2x7r,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(0e021dd7-397f-4546-a38b-c8c13a1c830d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:46:10 crc kubenswrapper[4675]: E0124 07:46:10.971634 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" Jan 24 07:46:11 crc kubenswrapper[4675]: E0124 07:46:11.234311 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" Jan 24 07:46:19 crc kubenswrapper[4675]: I0124 07:46:19.943053 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:19 crc kubenswrapper[4675]: E0124 07:46:19.943780 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:23 crc kubenswrapper[4675]: I0124 07:46:23.420524 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 24 07:46:25 crc kubenswrapper[4675]: I0124 07:46:25.367283 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerStarted","Data":"385c10b2b55e4de5d8ab2c6942a73ce0435169e041a1198367e1b28172ea5233"} Jan 24 07:46:25 crc kubenswrapper[4675]: I0124 07:46:25.403147 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.464930784 podStartE2EDuration="58.403130368s" podCreationTimestamp="2026-01-24 07:45:27 +0000 UTC" firstStartedPulling="2026-01-24 07:45:29.480115271 +0000 UTC m=+3130.776220534" lastFinishedPulling="2026-01-24 07:46:23.418314895 +0000 UTC m=+3184.714420118" observedRunningTime="2026-01-24 07:46:25.393211337 +0000 UTC m=+3186.689316560" watchObservedRunningTime="2026-01-24 07:46:25.403130368 +0000 UTC m=+3186.699235591" Jan 24 07:46:32 crc kubenswrapper[4675]: I0124 
07:46:32.943172 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:32 crc kubenswrapper[4675]: E0124 07:46:32.944050 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:47 crc kubenswrapper[4675]: I0124 07:46:47.943100 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:47 crc kubenswrapper[4675]: E0124 07:46:47.943776 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:47:02 crc kubenswrapper[4675]: I0124 07:47:02.943032 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:47:02 crc kubenswrapper[4675]: E0124 07:47:02.943845 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:47:16 crc 
kubenswrapper[4675]: I0124 07:47:16.942543 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:47:17 crc kubenswrapper[4675]: I0124 07:47:17.877661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9"} Jan 24 07:47:54 crc kubenswrapper[4675]: I0124 07:47:54.234658 4675 generic.go:334] "Generic (PLEG): container finished" podID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerID="385c10b2b55e4de5d8ab2c6942a73ce0435169e041a1198367e1b28172ea5233" exitCode=0 Jan 24 07:47:54 crc kubenswrapper[4675]: I0124 07:47:54.234845 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerDied","Data":"385c10b2b55e4de5d8ab2c6942a73ce0435169e041a1198367e1b28172ea5233"} Jan 24 07:47:55 crc kubenswrapper[4675]: I0124 07:47:55.947167 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090607 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090800 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090847 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090873 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090969 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.091012 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.091095 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.092050 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data" (OuterVolumeSpecName: "config-data") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.095694 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.095951 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.106325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.109043 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r" (OuterVolumeSpecName: "kube-api-access-n2x7r") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "kube-api-access-n2x7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.127518 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.131786 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.155908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.172848 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193859 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193916 4675 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193932 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193946 4675 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193959 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193991 4675 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.194002 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") on node \"crc\" 
DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.194042 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.196060 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.215169 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.259061 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerDied","Data":"b58f40ef8b59d9e13d49226574fe9e8046a0f1e06fd832dbb76212fb15d47084"} Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.259098 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58f40ef8b59d9e13d49226574fe9e8046a0f1e06fd832dbb76212fb15d47084" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.259151 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.298415 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.690768 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 07:48:02 crc kubenswrapper[4675]: E0124 07:48:02.692995 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerName="tempest-tests-tempest-tests-runner" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.693106 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerName="tempest-tests-tempest-tests-runner" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.720169 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerName="tempest-tests-tempest-tests-runner" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.721115 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.724091 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.738246 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zffgp" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.824920 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.825052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdvz\" (UniqueName: \"kubernetes.io/projected/6051ab9a-5c43-4757-a1ff-3f199dee0a79-kube-api-access-vkdvz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.926199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.926319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdvz\" (UniqueName: 
\"kubernetes.io/projected/6051ab9a-5c43-4757-a1ff-3f199dee0a79-kube-api-access-vkdvz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.926816 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.956834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdvz\" (UniqueName: \"kubernetes.io/projected/6051ab9a-5c43-4757-a1ff-3f199dee0a79-kube-api-access-vkdvz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.977809 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:03 crc kubenswrapper[4675]: I0124 07:48:03.061387 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:03 crc kubenswrapper[4675]: I0124 07:48:03.570378 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 07:48:03 crc kubenswrapper[4675]: W0124 07:48:03.592860 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6051ab9a_5c43_4757_a1ff_3f199dee0a79.slice/crio-b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999 WatchSource:0}: Error finding container b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999: Status 404 returned error can't find the container with id b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999 Jan 24 07:48:04 crc kubenswrapper[4675]: I0124 07:48:04.358339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6051ab9a-5c43-4757-a1ff-3f199dee0a79","Type":"ContainerStarted","Data":"b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999"} Jan 24 07:48:05 crc kubenswrapper[4675]: I0124 07:48:05.369639 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6051ab9a-5c43-4757-a1ff-3f199dee0a79","Type":"ContainerStarted","Data":"2ecd5de1e849759e8ab9154c1d74fc7b627da686494e2580089cd17c90952a4d"} Jan 24 07:48:05 crc kubenswrapper[4675]: I0124 07:48:05.392861 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.370970201 podStartE2EDuration="3.392835733s" podCreationTimestamp="2026-01-24 07:48:02 +0000 UTC" firstStartedPulling="2026-01-24 07:48:03.608085585 +0000 UTC m=+3284.904190818" lastFinishedPulling="2026-01-24 07:48:04.629951077 +0000 UTC m=+3285.926056350" 
observedRunningTime="2026-01-24 07:48:05.383914776 +0000 UTC m=+3286.680019999" watchObservedRunningTime="2026-01-24 07:48:05.392835733 +0000 UTC m=+3286.688940966" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.210539 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.212871 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.217025 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rk4k5"/"openshift-service-ca.crt" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.217148 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rk4k5"/"kube-root-ca.crt" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.244230 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.264460 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.264581 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.366355 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.366482 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.366869 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.408559 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.529237 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.994053 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:48:30 crc kubenswrapper[4675]: W0124 07:48:30.999751 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod179bf7a3_0095_4057_b946_ac2ee02c99ef.slice/crio-9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19 WatchSource:0}: Error finding container 9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19: Status 404 returned error can't find the container with id 9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19 Jan 24 07:48:31 crc kubenswrapper[4675]: I0124 07:48:31.631064 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerStarted","Data":"9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19"} Jan 24 07:48:40 crc kubenswrapper[4675]: I0124 07:48:40.720105 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerStarted","Data":"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2"} Jan 24 07:48:40 crc kubenswrapper[4675]: I0124 07:48:40.720607 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerStarted","Data":"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588"} Jan 24 07:48:40 crc kubenswrapper[4675]: I0124 07:48:40.749059 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" podStartSLOduration=1.9752995260000001 
podStartE2EDuration="10.749036161s" podCreationTimestamp="2026-01-24 07:48:30 +0000 UTC" firstStartedPulling="2026-01-24 07:48:31.001765248 +0000 UTC m=+3312.297870471" lastFinishedPulling="2026-01-24 07:48:39.775501883 +0000 UTC m=+3321.071607106" observedRunningTime="2026-01-24 07:48:40.733549715 +0000 UTC m=+3322.029654938" watchObservedRunningTime="2026-01-24 07:48:40.749036161 +0000 UTC m=+3322.045141384" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.701065 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-kcg4w"] Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.705857 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.707420 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk4k5"/"default-dockercfg-grs5v" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.819943 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.820112 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.921778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gnc\" (UniqueName: 
\"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.921849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.922019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.938367 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:45 crc kubenswrapper[4675]: I0124 07:48:45.030158 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:45 crc kubenswrapper[4675]: W0124 07:48:45.081116 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f8fcc1d_4f4e_4756_a0ee_dad9db37efd0.slice/crio-765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e WatchSource:0}: Error finding container 765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e: Status 404 returned error can't find the container with id 765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e Jan 24 07:48:45 crc kubenswrapper[4675]: I0124 07:48:45.766583 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" event={"ID":"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0","Type":"ContainerStarted","Data":"765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e"} Jan 24 07:48:57 crc kubenswrapper[4675]: I0124 07:48:57.882663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" event={"ID":"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0","Type":"ContainerStarted","Data":"c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd"} Jan 24 07:48:57 crc kubenswrapper[4675]: I0124 07:48:57.905383 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" podStartSLOduration=2.249167843 podStartE2EDuration="13.905352466s" podCreationTimestamp="2026-01-24 07:48:44 +0000 UTC" firstStartedPulling="2026-01-24 07:48:45.08592206 +0000 UTC m=+3326.382027283" lastFinishedPulling="2026-01-24 07:48:56.742106683 +0000 UTC m=+3338.038211906" observedRunningTime="2026-01-24 07:48:57.903314287 +0000 UTC m=+3339.199419510" watchObservedRunningTime="2026-01-24 07:48:57.905352466 +0000 UTC m=+3339.201457689" Jan 24 07:49:12 crc kubenswrapper[4675]: I0124 07:49:12.005843 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerID="c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd" exitCode=0 Jan 24 07:49:12 crc kubenswrapper[4675]: I0124 07:49:12.005923 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" event={"ID":"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0","Type":"ContainerDied","Data":"c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd"} Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.114212 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.157770 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-kcg4w"] Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.164762 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-kcg4w"] Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.186980 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.187142 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host" (OuterVolumeSpecName: "host") pod "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" (UID: "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.187416 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.187851 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.200946 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc" (OuterVolumeSpecName: "kube-api-access-d4gnc") pod "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" (UID: "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0"). InnerVolumeSpecName "kube-api-access-d4gnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.289786 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.021298 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.021341 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.391473 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-df7rh"] Jan 24 07:49:14 crc kubenswrapper[4675]: E0124 07:49:14.391930 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerName="container-00" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.391945 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerName="container-00" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.392154 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerName="container-00" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.392817 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.394669 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk4k5"/"default-dockercfg-grs5v" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.511019 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.511349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " 
pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.612459 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.612511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.612694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.633568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.708582 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.951275 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" path="/var/lib/kubelet/pods/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0/volumes" Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.031102 4675 generic.go:334] "Generic (PLEG): container finished" podID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerID="6147927f4957f1dc3488aaede237f3f7588cb12024d74ec9e368eed77317985e" exitCode=1 Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.031142 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" event={"ID":"f989d7ff-d822-48c0-9434-7c435c1a3897","Type":"ContainerDied","Data":"6147927f4957f1dc3488aaede237f3f7588cb12024d74ec9e368eed77317985e"} Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.031190 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" event={"ID":"f989d7ff-d822-48c0-9434-7c435c1a3897","Type":"ContainerStarted","Data":"adaf49d2751b18eb74cc36ca9f72bb613d4462194ef46032b228ad790f3431b5"} Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.064994 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-df7rh"] Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.076160 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-df7rh"] Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.159475 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.255122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"f989d7ff-d822-48c0-9434-7c435c1a3897\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.255208 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"f989d7ff-d822-48c0-9434-7c435c1a3897\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.256565 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host" (OuterVolumeSpecName: "host") pod "f989d7ff-d822-48c0-9434-7c435c1a3897" (UID: "f989d7ff-d822-48c0-9434-7c435c1a3897"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.269926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b" (OuterVolumeSpecName: "kube-api-access-nsj5b") pod "f989d7ff-d822-48c0-9434-7c435c1a3897" (UID: "f989d7ff-d822-48c0-9434-7c435c1a3897"). InnerVolumeSpecName "kube-api-access-nsj5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.356804 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.356835 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.952576 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" path="/var/lib/kubelet/pods/f989d7ff-d822-48c0-9434-7c435c1a3897/volumes" Jan 24 07:49:17 crc kubenswrapper[4675]: I0124 07:49:17.055212 4675 scope.go:117] "RemoveContainer" containerID="6147927f4957f1dc3488aaede237f3f7588cb12024d74ec9e368eed77317985e" Jan 24 07:49:17 crc kubenswrapper[4675]: I0124 07:49:17.055761 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:38 crc kubenswrapper[4675]: I0124 07:49:38.629761 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:49:38 crc kubenswrapper[4675]: I0124 07:49:38.630295 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.318945 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.454154 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api-log/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.509349 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.565997 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener-log/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.769502 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.806582 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker-log/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.957571 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw_e9b8f08b-6ece-4b46-86c0-9c353d61c50c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.993870 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-central-agent/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.063055 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-notification-agent/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.201806 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/sg-core/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.225691 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/proxy-httpd/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.313269 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.392361 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api-log/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.546406 4675 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/probe/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.608674 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/cinder-scheduler/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.774786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-td879_bc52fac9-92d8-4555-b942-5f0dcb4bf6f3/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.864212 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm_eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.003429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.193120 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.227543 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/dnsmasq-dns/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.233126 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49lhh_09d123a4-63c4-4269-b4e1-12932baedfd0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.439189 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-log/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.441105 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-httpd/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.633862 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-httpd/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.661034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-log/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.845249 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/1.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.896758 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/0.log" Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.147539 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh_2d09456f-a230-420b-b288-c0dc3e8a6e22/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.224957 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon-log/0.log" Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.316070 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vbvgv_27ad7637-701b-43e1-8440-0fd32522fc56/install-os-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.502202 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dbffd67c8-k8gzb_405f0f26-61a4-4420-a147-43d7b86ebb8e/keystone-api/0.log" Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.545054 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b742b344-80ea-48bf-bd28-8f1be00b4442/kube-state-metrics/0.log" Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.727045 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq_d457c71e-ef41-4bf9-a59b-b3221df26b41/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.021245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-api/0.log" Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.078201 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-httpd/0.log" Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.359214 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g_388e10c7-15e4-40d5-94ed-5c6612f7fbfe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.630278 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.630326 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.891884 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-api/0.log" Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.902198 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-log/0.log" Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.002034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a3a43606-cba1-4fca-93c4-a1937ee449cc/nova-cell0-conductor-conductor/0.log" Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.211870 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8afe3d83-5678-47e9-be7d-dfbf50fa5bc9/nova-cell1-conductor-conductor/0.log" Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.312902 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a485ae65-6b4d-4cc6-9623-dc0b722f47e8/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.642684 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-log/0.log" Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.646781 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-k8fng_f4024f70-df50-442c-bcd5-c599d978277c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.033689 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_361b5d16-2808-40ad-88a0-f07fd4c33e3e/nova-scheduler-scheduler/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.064544 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.242263 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.247274 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/galera/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.451608 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.617592 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-metadata/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.766187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.860898 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/galera/0.log" Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.881052 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb/openstackclient/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.056641 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-2x2kb_b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1/ovn-controller/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.214409 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b7pft_1e0062ff-7e89-4c55-8796-de1c9e311dd2/openstack-network-exporter/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.314491 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.524342 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.527996 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.545043 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovs-vswitchd/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.761670 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/openstack-network-exporter/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.814327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7vbln_3e407880-d27a-4aa2-bb81-a87bb20ffcf1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.923362 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/ovn-northd/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.054899 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/openstack-network-exporter/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.121450 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/ovsdbserver-nb/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.304323 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/ovsdbserver-sb/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.365011 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/openstack-network-exporter/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.580424 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-api/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.590812 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-log/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.741573 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log" Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.972773 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/rabbitmq/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.051112 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.094953 4675 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.189621 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.293126 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/rabbitmq/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.460327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw_7b1b0570-d3a2-4029-bcf8-f41144ea0f06/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.576468 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zd8ln_55150857-7da2-4609-84be-9cbaa28141ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.881863 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q_774fb762-6506-4e0c-9732-9208f7802057/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.111934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ln8x2_3bc4008d-f8c6-4745-b524-d6136632cbfb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.165215 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wq6r9_191f15b0-8a3b-4dc4-bc49-9003c61619bf/ssh-known-hosts-edpm-deployment/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.483490 4675 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-server/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.548021 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-httpd/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.601894 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sz46b_57da3a87-eeeb-47c8-b1bd-6a160dd81ff8/swift-ring-rebalance/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.792473 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:14 crc kubenswrapper[4675]: E0124 07:50:14.792869 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerName="container-00" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.792886 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerName="container-00" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.793071 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerName="container-00" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.794389 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.808408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.907510 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-auditor/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.960823 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-reaper/0.log" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.966255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.966305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.966337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.048145 4675 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-replicator/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.067427 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.067491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.067612 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.068035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.068488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"redhat-marketplace-wsg5j\" (UID: 
\"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.104710 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.130636 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.182640 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-server/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.230667 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-auditor/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.510939 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-server/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.513623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-replicator/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.619374 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-auditor/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.625221 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-updater/0.log" Jan 24 07:50:15 crc 
kubenswrapper[4675]: I0124 07:50:15.655512 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.796145 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-replicator/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.823865 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-expirer/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.902094 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-server/0.log" Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.980932 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/rsync/0.log" Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.025481 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-updater/0.log" Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.124754 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/swift-recon-cron/0.log" Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.362934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx_e47d7738-3361-429e-90f9-02dee4f0052e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.497879 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e021dd7-397f-4546-a38b-c8c13a1c830d/tempest-tests-tempest-tests-runner/0.log" Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.555421 
4675 generic.go:334] "Generic (PLEG): container finished" podID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" exitCode=0 Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.555464 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93"} Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.555490 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerStarted","Data":"de12482b41e0e33cceec8470bd0aa79f9be6258755a8988e76fc4bad5d7ee95c"} Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.572253 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6051ab9a-5c43-4757-a1ff-3f199dee0a79/test-operator-logs-container/0.log" Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.791889 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc_e9c128cc-910c-4ef2-9b56-14adf4d264b3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.390468 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.392805 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.440127 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.520392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.520658 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.520756 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.569736 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerStarted","Data":"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76"} Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.621970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622065 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.648366 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6ct\" (UniqueName: 
\"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.719504 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.307521 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.581598 4675 generic.go:334] "Generic (PLEG): container finished" podID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" exitCode=0 Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.581953 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76"} Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.590636 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1396ef9-9c28-4198-a055-b132c7205bff" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" exitCode=0 Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.590677 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8"} Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.590701 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" 
event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerStarted","Data":"e2a05fb6cdb9cadfc028e071a632b920a8f3f839f100de17bd3aba7d97eb8e93"} Jan 24 07:50:19 crc kubenswrapper[4675]: I0124 07:50:19.601451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerStarted","Data":"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb"} Jan 24 07:50:19 crc kubenswrapper[4675]: I0124 07:50:19.605959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerStarted","Data":"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f"} Jan 24 07:50:21 crc kubenswrapper[4675]: I0124 07:50:21.621020 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1396ef9-9c28-4198-a055-b132c7205bff" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" exitCode=0 Jan 24 07:50:21 crc kubenswrapper[4675]: I0124 07:50:21.621466 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb"} Jan 24 07:50:21 crc kubenswrapper[4675]: I0124 07:50:21.642293 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wsg5j" podStartSLOduration=5.150707552 podStartE2EDuration="7.642278235s" podCreationTimestamp="2026-01-24 07:50:14 +0000 UTC" firstStartedPulling="2026-01-24 07:50:16.557207557 +0000 UTC m=+3417.853312780" lastFinishedPulling="2026-01-24 07:50:19.04877824 +0000 UTC m=+3420.344883463" observedRunningTime="2026-01-24 07:50:19.660138328 +0000 UTC m=+3420.956243551" watchObservedRunningTime="2026-01-24 07:50:21.642278235 +0000 UTC 
m=+3422.938383458" Jan 24 07:50:22 crc kubenswrapper[4675]: I0124 07:50:22.640848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerStarted","Data":"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22"} Jan 24 07:50:22 crc kubenswrapper[4675]: I0124 07:50:22.665746 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r877k" podStartSLOduration=2.072120666 podStartE2EDuration="5.665732235s" podCreationTimestamp="2026-01-24 07:50:17 +0000 UTC" firstStartedPulling="2026-01-24 07:50:18.592238679 +0000 UTC m=+3419.888343902" lastFinishedPulling="2026-01-24 07:50:22.185850248 +0000 UTC m=+3423.481955471" observedRunningTime="2026-01-24 07:50:22.66306319 +0000 UTC m=+3423.959168413" watchObservedRunningTime="2026-01-24 07:50:22.665732235 +0000 UTC m=+3423.961837458" Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.146995 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.147274 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.221980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.727897 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:26 crc kubenswrapper[4675]: I0124 07:50:26.390179 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.712549 4675 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wsg5j" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" containerID="cri-o://c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" gracePeriod=2 Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.720207 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.721394 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.787574 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.199833 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.251791 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"283b234b-97b3-4128-b75e-c07e0dd22cd8\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.251981 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"283b234b-97b3-4128-b75e-c07e0dd22cd8\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.252061 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"283b234b-97b3-4128-b75e-c07e0dd22cd8\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.253309 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities" (OuterVolumeSpecName: "utilities") pod "283b234b-97b3-4128-b75e-c07e0dd22cd8" (UID: "283b234b-97b3-4128-b75e-c07e0dd22cd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.275059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk" (OuterVolumeSpecName: "kube-api-access-fpsdk") pod "283b234b-97b3-4128-b75e-c07e0dd22cd8" (UID: "283b234b-97b3-4128-b75e-c07e0dd22cd8"). InnerVolumeSpecName "kube-api-access-fpsdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.281095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "283b234b-97b3-4128-b75e-c07e0dd22cd8" (UID: "283b234b-97b3-4128-b75e-c07e0dd22cd8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.353965 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.354003 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.354015 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.721164 4675 generic.go:334] "Generic (PLEG): container finished" podID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" exitCode=0 Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.721903 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.722126 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f"} Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.722159 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"de12482b41e0e33cceec8470bd0aa79f9be6258755a8988e76fc4bad5d7ee95c"} Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.722176 4675 scope.go:117] "RemoveContainer" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.752110 4675 scope.go:117] "RemoveContainer" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.788979 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.790152 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.795768 4675 scope.go:117] "RemoveContainer" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.809212 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.868532 4675 scope.go:117] "RemoveContainer" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" Jan 24 07:50:28 crc 
kubenswrapper[4675]: E0124 07:50:28.869364 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f\": container with ID starting with c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f not found: ID does not exist" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869396 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f"} err="failed to get container status \"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f\": rpc error: code = NotFound desc = could not find container \"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f\": container with ID starting with c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f not found: ID does not exist" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869416 4675 scope.go:117] "RemoveContainer" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" Jan 24 07:50:28 crc kubenswrapper[4675]: E0124 07:50:28.869657 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76\": container with ID starting with ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76 not found: ID does not exist" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869675 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76"} err="failed to get container status 
\"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76\": rpc error: code = NotFound desc = could not find container \"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76\": container with ID starting with ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76 not found: ID does not exist" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869689 4675 scope.go:117] "RemoveContainer" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" Jan 24 07:50:28 crc kubenswrapper[4675]: E0124 07:50:28.869950 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93\": container with ID starting with 380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93 not found: ID does not exist" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.870008 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93"} err="failed to get container status \"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93\": rpc error: code = NotFound desc = could not find container \"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93\": container with ID starting with 380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93 not found: ID does not exist" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.954318 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" path="/var/lib/kubelet/pods/283b234b-97b3-4128-b75e-c07e0dd22cd8/volumes" Jan 24 07:50:30 crc kubenswrapper[4675]: I0124 07:50:30.713903 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_b2446e52-3d97-46f2-ac99-4bb1af82d302/memcached/0.log" Jan 24 07:50:31 crc kubenswrapper[4675]: I0124 07:50:31.185045 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:31 crc kubenswrapper[4675]: I0124 07:50:31.742070 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r877k" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" containerID="cri-o://353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" gracePeriod=2 Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.230269 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.317522 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"b1396ef9-9c28-4198-a055-b132c7205bff\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.317881 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"b1396ef9-9c28-4198-a055-b132c7205bff\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.317909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"b1396ef9-9c28-4198-a055-b132c7205bff\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.319592 4675 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities" (OuterVolumeSpecName: "utilities") pod "b1396ef9-9c28-4198-a055-b132c7205bff" (UID: "b1396ef9-9c28-4198-a055-b132c7205bff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.346066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct" (OuterVolumeSpecName: "kube-api-access-lk6ct") pod "b1396ef9-9c28-4198-a055-b132c7205bff" (UID: "b1396ef9-9c28-4198-a055-b132c7205bff"). InnerVolumeSpecName "kube-api-access-lk6ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.379156 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1396ef9-9c28-4198-a055-b132c7205bff" (UID: "b1396ef9-9c28-4198-a055-b132c7205bff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.419900 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.419947 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.419958 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751768 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1396ef9-9c28-4198-a055-b132c7205bff" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" exitCode=0 Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751809 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751819 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22"} Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"e2a05fb6cdb9cadfc028e071a632b920a8f3f839f100de17bd3aba7d97eb8e93"} Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751888 4675 scope.go:117] "RemoveContainer" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.773331 4675 scope.go:117] "RemoveContainer" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.786169 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.809577 4675 scope.go:117] "RemoveContainer" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.818175 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.849636 4675 scope.go:117] "RemoveContainer" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" Jan 24 07:50:32 crc kubenswrapper[4675]: E0124 07:50:32.864102 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22\": container with ID starting with 353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22 not found: ID does not exist" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864144 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22"} err="failed to get container status \"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22\": rpc error: code = NotFound desc = could not find container \"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22\": container with ID starting with 353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22 not found: ID does not exist" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864180 4675 scope.go:117] "RemoveContainer" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" Jan 24 07:50:32 crc kubenswrapper[4675]: E0124 07:50:32.864409 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb\": container with ID starting with 4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb not found: ID does not exist" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864429 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb"} err="failed to get container status \"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb\": rpc error: code = NotFound desc = could not find container \"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb\": container with ID 
starting with 4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb not found: ID does not exist" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864442 4675 scope.go:117] "RemoveContainer" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" Jan 24 07:50:32 crc kubenswrapper[4675]: E0124 07:50:32.864603 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8\": container with ID starting with 9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8 not found: ID does not exist" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864621 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8"} err="failed to get container status \"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8\": rpc error: code = NotFound desc = could not find container \"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8\": container with ID starting with 9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8 not found: ID does not exist" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.951743 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" path="/var/lib/kubelet/pods/b1396ef9-9c28-4198-a055-b132c7205bff/volumes" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.644354 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 
07:50:38.644882 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.644924 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.645958 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.646027 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9" gracePeriod=600 Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.805681 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9" exitCode=0 Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.805782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9"} Jan 24 07:50:38 crc 
kubenswrapper[4675]: I0124 07:50:38.806051 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:50:39 crc kubenswrapper[4675]: I0124 07:50:39.816581 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"} Jan 24 07:50:49 crc kubenswrapper[4675]: I0124 07:50:49.788831 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-dwbq6_2db25911-f36e-43ae-8f47-b042ec82266e/manager/0.log" Jan 24 07:50:49 crc kubenswrapper[4675]: I0124 07:50:49.975934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.167798 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.201883 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.238691 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.448896 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.475271 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.537749 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/extract/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.791743 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-6jbwg_b8285f65-9930-4bb9-9e18-b6ffe19f45fb/manager/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.812210 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-79fwx_6003a1f9-ad0e-49f6-8750-6ac2208560cc/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.019135 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-thqtz_e7263d16-14c3-4254-821a-cbf99b7cf3e4/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.166439 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-mqk98_7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.411040 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-67vkh_4aa5aa88-c6f2-4000-9a9d-3b14e23220de/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.661951 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-l7jq5_06f423e8-7ba9-497d-a587-cc880d66625b/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.726469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-c5658_743af71f-3542-439c-b3a1-33a7b9ae34f1/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.894287 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bqd4q_5b3a45f7-a1eb-44a2-b0be-7c77b190d50c/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.028609 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6lq96_e09ce8a8-a2a4-4fec-b36d-a97910aced0f/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.152570 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-vjf84_7660e41e-527d-4806-8ef3-6dee25fa72c5/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.301405 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dzvlp_724ac56d-9f4e-40f9-98f7-3a65c807f89c/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.472597 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4lmvf_6f867475-7eee-431c-97ee-12ae861193c7/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.573965 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q6qn9_bdc167a3-9335-4b3d-9696-a1d03b9ae618/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 
07:50:52.723446 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk_ac97fbc7-211e-41e3-8e16-aff853a7c9f4/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.932866 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d498c57f9-4vbdv_fc267189-e8ca-412c-bb9a-6b251571a514/operator/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.246893 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-d4hsh_954076ba-3e6f-4e5b-9b3f-4637840d5021/registry-server/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.601026 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-n4kll_a1041f21-5d7d-4b17-84ff-ee83332e604d/manager/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.792623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-l5hrz_20b0ee18-4569-4428-956f-d8795904f368/manager/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.962453 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9cmpf_b7d1f492-700c-492e-a1c2-eae496f0133c/operator/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.048498 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-688fccdd58-dkxf7_d94b056e-c445-4033-8d02-a794dae4b671/manager/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.248213 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d55b89685-9rvmf_4bfb9011-058d-494d-96ce-a39202c7b851/manager/0.log" Jan 24 07:50:54 crc 
kubenswrapper[4675]: I0124 07:50:54.303174 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-n6jmw_47e89f8e-f652-43a1-a36a-2db184700f3e/manager/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.517303 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-k7crk_fae349a1-6c08-4424-abe2-42dddccd55cc/manager/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.552006 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-9fkjr_f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480/manager/0.log" Jan 24 07:51:16 crc kubenswrapper[4675]: I0124 07:51:16.023493 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kdjm5_e08de50b-8092-4f29-b2a8-a391b4778142/control-plane-machine-set-operator/0.log" Jan 24 07:51:16 crc kubenswrapper[4675]: I0124 07:51:16.335154 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/kube-rbac-proxy/0.log" Jan 24 07:51:16 crc kubenswrapper[4675]: I0124 07:51:16.386980 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/machine-api-operator/0.log" Jan 24 07:51:30 crc kubenswrapper[4675]: I0124 07:51:30.965834 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gt7xw_f9d3eaae-49ca-400c-a277-bdbad7f8125a/cert-manager-controller/0.log" Jan 24 07:51:31 crc kubenswrapper[4675]: I0124 07:51:31.151023 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6kp8k_99008be6-effb-4dc7-a761-ee291c03f093/cert-manager-cainjector/0.log" 
Jan 24 07:51:31 crc kubenswrapper[4675]: I0124 07:51:31.215504 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lthpk_261785a7-b436-4597-a36b-473d27769006/cert-manager-webhook/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.342111 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-szblh_b289d862-4851-4f88-9a5b-4bed8cd70bd8/nmstate-console-plugin/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.588084 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ljst6_8c82b668-f857-4de6-a938-333a7e44591f/nmstate-handler/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.647063 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/kube-rbac-proxy/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.676963 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/nmstate-metrics/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.877790 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dm24p_b344cabd-3dd6-4691-990b-045aaf4c622f/nmstate-operator/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.924431 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-77dfm_469eb31f-c261-4d7f-8a12-c10ed969bd55/nmstate-webhook/0.log" Jan 24 07:52:15 crc kubenswrapper[4675]: I0124 07:52:15.559019 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/controller/0.log" Jan 24 07:52:15 crc kubenswrapper[4675]: I0124 07:52:15.616972 4675 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/kube-rbac-proxy/0.log" Jan 24 07:52:15 crc kubenswrapper[4675]: I0124 07:52:15.925865 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.206041 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.210260 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.241729 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.264136 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.431861 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.477588 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.516397 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.539453 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.717592 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.739155 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.740298 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.821768 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/controller/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.967013 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr-metrics/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.142450 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.217920 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/reloader/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.244673 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy-frr/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.521164 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-skd24_032ac1eb-bb7f-4f94-b9ad-4d710032f3af/frr-k8s-webhook-server/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.730117 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d867674d-x4v6v_0cf0ee32-c416-4629-a441-268fbe054062/manager/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.873093 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.889980 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f499b46f-tntmc_893cbc8e-86ae-4910-8693-061301da0ba6/webhook-server/0.log" Jan 24 07:52:18 crc kubenswrapper[4675]: I0124 07:52:18.180114 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/kube-rbac-proxy/0.log" Jan 24 07:52:18 crc kubenswrapper[4675]: I0124 07:52:18.391527 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/speaker/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.445852 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.627048 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.665029 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.680131 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.996398 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.023132 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/extract/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.027626 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.221412 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.488398 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.493010 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 
07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.540619 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.786962 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.790426 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.881429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/extract/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.004900 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.149292 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.207981 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.238939 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 
07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.395344 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.412424 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.682632 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.805618 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/registry-server/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.916498 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.922912 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.969435 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.256447 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.330522 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.598206 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9cx7r_83c80cb7-74c3-417a-8d8e-54cdcf640b5b/marketplace-operator/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.890809 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/registry-server/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.953126 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.069132 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.114074 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.119655 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.373780 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.427751 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/registry-server/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.440752 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.582917 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.813450 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.826564 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.875346 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.976212 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.987786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 07:52:38 crc kubenswrapper[4675]: I0124 07:52:38.355240 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/registry-server/0.log" Jan 24 
07:52:38 crc kubenswrapper[4675]: I0124 07:52:38.629901 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:52:38 crc kubenswrapper[4675]: I0124 07:52:38.629948 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:08 crc kubenswrapper[4675]: I0124 07:53:08.631072 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:53:08 crc kubenswrapper[4675]: I0124 07:53:08.631687 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.630416 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.631036 4675 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.631106 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.632222 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.632301 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" gracePeriod=600 Jan 24 07:53:38 crc kubenswrapper[4675]: E0124 07:53:38.767615 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.516235 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" 
containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" exitCode=0 Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.516453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"} Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.516489 4675 scope.go:117] "RemoveContainer" containerID="0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9" Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.517122 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:53:39 crc kubenswrapper[4675]: E0124 07:53:39.517473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.078552 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079562 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079579 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079600 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079609 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079622 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079631 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079649 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079659 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079679 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079688 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079708 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079734 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079989 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.080007 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.081744 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.107423 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.277421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.277518 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.277570 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379269 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379407 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379932 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379978 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.422624 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jqb7\" (UniqueName: 
\"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.712917 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.216331 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.656113 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" exitCode=0 Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.656533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07"} Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.656587 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerStarted","Data":"b8ee1a0a1cf2fb484747062f4702e1804aef2c397b833f4b7f7333428f156e56"} Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.662564 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:53:53 crc kubenswrapper[4675]: I0124 07:53:53.672666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerStarted","Data":"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca"} Jan 24 07:53:53 crc 
kubenswrapper[4675]: I0124 07:53:53.942861 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:53:53 crc kubenswrapper[4675]: E0124 07:53:53.943403 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:53:57 crc kubenswrapper[4675]: I0124 07:53:57.726048 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" exitCode=0 Jan 24 07:53:57 crc kubenswrapper[4675]: I0124 07:53:57.726153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca"} Jan 24 07:53:59 crc kubenswrapper[4675]: I0124 07:53:59.757292 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerStarted","Data":"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09"} Jan 24 07:54:01 crc kubenswrapper[4675]: I0124 07:54:01.713098 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:01 crc kubenswrapper[4675]: I0124 07:54:01.713410 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:02 crc kubenswrapper[4675]: I0124 07:54:02.774752 4675 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j8qd6" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" probeResult="failure" output=< Jan 24 07:54:02 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:54:02 crc kubenswrapper[4675]: > Jan 24 07:54:07 crc kubenswrapper[4675]: I0124 07:54:07.943347 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:07 crc kubenswrapper[4675]: E0124 07:54:07.945227 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:11 crc kubenswrapper[4675]: I0124 07:54:11.801145 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:11 crc kubenswrapper[4675]: I0124 07:54:11.827914 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j8qd6" podStartSLOduration=14.791716404 podStartE2EDuration="20.82789395s" podCreationTimestamp="2026-01-24 07:53:51 +0000 UTC" firstStartedPulling="2026-01-24 07:53:52.659207418 +0000 UTC m=+3633.955312641" lastFinishedPulling="2026-01-24 07:53:58.695384954 +0000 UTC m=+3639.991490187" observedRunningTime="2026-01-24 07:53:59.788733237 +0000 UTC m=+3641.084838470" watchObservedRunningTime="2026-01-24 07:54:11.82789395 +0000 UTC m=+3653.123999183" Jan 24 07:54:11 crc kubenswrapper[4675]: I0124 07:54:11.880646 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:12 crc kubenswrapper[4675]: I0124 07:54:12.049211 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:54:12 crc kubenswrapper[4675]: I0124 07:54:12.876607 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j8qd6" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" containerID="cri-o://5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" gracePeriod=2 Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.339348 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.359196 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"6e6a9d98-2990-4c73-acc3-48d6623eb351\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.359321 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"6e6a9d98-2990-4c73-acc3-48d6623eb351\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.360413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities" (OuterVolumeSpecName: "utilities") pod "6e6a9d98-2990-4c73-acc3-48d6623eb351" (UID: "6e6a9d98-2990-4c73-acc3-48d6623eb351"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.360582 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"6e6a9d98-2990-4c73-acc3-48d6623eb351\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.362585 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.375897 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7" (OuterVolumeSpecName: "kube-api-access-9jqb7") pod "6e6a9d98-2990-4c73-acc3-48d6623eb351" (UID: "6e6a9d98-2990-4c73-acc3-48d6623eb351"). InnerVolumeSpecName "kube-api-access-9jqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.464537 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.482109 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e6a9d98-2990-4c73-acc3-48d6623eb351" (UID: "6e6a9d98-2990-4c73-acc3-48d6623eb351"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.566415 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.890973 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" exitCode=0 Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891060 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891075 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09"} Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891537 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"b8ee1a0a1cf2fb484747062f4702e1804aef2c397b833f4b7f7333428f156e56"} Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891602 4675 scope.go:117] "RemoveContainer" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.916773 4675 scope.go:117] "RemoveContainer" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.936707 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 
07:54:13.947041 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.948397 4675 scope.go:117] "RemoveContainer" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.997053 4675 scope.go:117] "RemoveContainer" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" Jan 24 07:54:13 crc kubenswrapper[4675]: E0124 07:54:13.997507 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09\": container with ID starting with 5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09 not found: ID does not exist" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.997547 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09"} err="failed to get container status \"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09\": rpc error: code = NotFound desc = could not find container \"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09\": container with ID starting with 5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09 not found: ID does not exist" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.997570 4675 scope.go:117] "RemoveContainer" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" Jan 24 07:54:14 crc kubenswrapper[4675]: E0124 07:54:14.000807 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca\": container with ID 
starting with b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca not found: ID does not exist" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.000840 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca"} err="failed to get container status \"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca\": rpc error: code = NotFound desc = could not find container \"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca\": container with ID starting with b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca not found: ID does not exist" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.000863 4675 scope.go:117] "RemoveContainer" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" Jan 24 07:54:14 crc kubenswrapper[4675]: E0124 07:54:14.001416 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07\": container with ID starting with 955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07 not found: ID does not exist" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.001549 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07"} err="failed to get container status \"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07\": rpc error: code = NotFound desc = could not find container \"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07\": container with ID starting with 955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07 not found: 
ID does not exist" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.960911 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" path="/var/lib/kubelet/pods/6e6a9d98-2990-4c73-acc3-48d6623eb351/volumes" Jan 24 07:54:19 crc kubenswrapper[4675]: I0124 07:54:19.943539 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:19 crc kubenswrapper[4675]: E0124 07:54:19.946258 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.056756 4675 generic.go:334] "Generic (PLEG): container finished" podID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" exitCode=0 Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.056856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerDied","Data":"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588"} Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.058166 4675 scope.go:117] "RemoveContainer" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.888469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk4k5_must-gather-w5wnw_179bf7a3-0095-4057-b946-ac2ee02c99ef/gather/0.log" Jan 24 07:54:34 crc kubenswrapper[4675]: I0124 07:54:34.943393 4675 
scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:34 crc kubenswrapper[4675]: E0124 07:54:34.944609 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.607123 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:35 crc kubenswrapper[4675]: E0124 07:54:35.608043 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.608098 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" Jan 24 07:54:35 crc kubenswrapper[4675]: E0124 07:54:35.608130 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-content" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.608138 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-content" Jan 24 07:54:35 crc kubenswrapper[4675]: E0124 07:54:35.608162 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-utilities" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.608175 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-utilities" Jan 24 07:54:35 crc 
kubenswrapper[4675]: I0124 07:54:35.608407 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.610056 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.626080 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.678004 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbph\" (UniqueName: \"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.678059 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.678128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.780245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lbph\" (UniqueName: 
\"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.780306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.780382 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.781032 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.781675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.807541 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lbph\" (UniqueName: 
\"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.987167 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:36 crc kubenswrapper[4675]: I0124 07:54:36.510914 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.181171 4675 generic.go:334] "Generic (PLEG): container finished" podID="34e0a03b-be29-4b7b-b0d8-773804190356" containerID="93f0d9f4bb35da6e1fd2b25bbd519bb50099fc2619ced465d2acbc19ff05a683" exitCode=0 Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.181219 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"93f0d9f4bb35da6e1fd2b25bbd519bb50099fc2619ced465d2acbc19ff05a683"} Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.181247 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerStarted","Data":"977f817e22bdb9bece05fa5fd4571fe1cfbc9c00d3d63bd4a3396e8eb4d8f179"} Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.460011 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.460751 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy" 
containerID="cri-o://846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" gracePeriod=2 Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.477013 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.915827 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk4k5_must-gather-w5wnw_179bf7a3-0095-4057-b946-ac2ee02c99ef/copy/0.log" Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.916567 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.022016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"179bf7a3-0095-4057-b946-ac2ee02c99ef\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.022066 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"179bf7a3-0095-4057-b946-ac2ee02c99ef\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.030894 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4" (OuterVolumeSpecName: "kube-api-access-w4db4") pod "179bf7a3-0095-4057-b946-ac2ee02c99ef" (UID: "179bf7a3-0095-4057-b946-ac2ee02c99ef"). InnerVolumeSpecName "kube-api-access-w4db4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.125690 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.173786 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "179bf7a3-0095-4057-b946-ac2ee02c99ef" (UID: "179bf7a3-0095-4057-b946-ac2ee02c99ef"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194057 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk4k5_must-gather-w5wnw_179bf7a3-0095-4057-b946-ac2ee02c99ef/copy/0.log" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194625 4675 generic.go:334] "Generic (PLEG): container finished" podID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" exitCode=143 Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194703 4675 scope.go:117] "RemoveContainer" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194698 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.199168 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerStarted","Data":"064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23"} Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.235418 4675 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.241218 4675 scope.go:117] "RemoveContainer" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.305255 4675 scope.go:117] "RemoveContainer" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" Jan 24 07:54:38 crc kubenswrapper[4675]: E0124 07:54:38.305943 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2\": container with ID starting with 846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2 not found: ID does not exist" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.305978 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2"} err="failed to get container status \"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2\": rpc error: code = NotFound desc = could not find container \"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2\": container with ID starting with 
846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2 not found: ID does not exist" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.305998 4675 scope.go:117] "RemoveContainer" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:38 crc kubenswrapper[4675]: E0124 07:54:38.306249 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588\": container with ID starting with 50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588 not found: ID does not exist" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.306291 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588"} err="failed to get container status \"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588\": rpc error: code = NotFound desc = could not find container \"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588\": container with ID starting with 50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588 not found: ID does not exist" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.952350 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" path="/var/lib/kubelet/pods/179bf7a3-0095-4057-b946-ac2ee02c99ef/volumes" Jan 24 07:54:39 crc kubenswrapper[4675]: I0124 07:54:39.209474 4675 generic.go:334] "Generic (PLEG): container finished" podID="34e0a03b-be29-4b7b-b0d8-773804190356" containerID="064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23" exitCode=0 Jan 24 07:54:39 crc kubenswrapper[4675]: I0124 07:54:39.209573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23"} Jan 24 07:54:40 crc kubenswrapper[4675]: I0124 07:54:40.226512 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerStarted","Data":"66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58"} Jan 24 07:54:40 crc kubenswrapper[4675]: I0124 07:54:40.251493 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xnzj" podStartSLOduration=2.798717325 podStartE2EDuration="5.251478723s" podCreationTimestamp="2026-01-24 07:54:35 +0000 UTC" firstStartedPulling="2026-01-24 07:54:37.184506963 +0000 UTC m=+3678.480612196" lastFinishedPulling="2026-01-24 07:54:39.637268361 +0000 UTC m=+3680.933373594" observedRunningTime="2026-01-24 07:54:40.241635133 +0000 UTC m=+3681.537740356" watchObservedRunningTime="2026-01-24 07:54:40.251478723 +0000 UTC m=+3681.547583946" Jan 24 07:54:45 crc kubenswrapper[4675]: I0124 07:54:45.987990 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:45 crc kubenswrapper[4675]: I0124 07:54:45.988819 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:46 crc kubenswrapper[4675]: I0124 07:54:46.071105 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:46 crc kubenswrapper[4675]: I0124 07:54:46.352228 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:46 crc kubenswrapper[4675]: I0124 07:54:46.416766 4675 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:48 crc kubenswrapper[4675]: I0124 07:54:48.310242 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xnzj" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server" containerID="cri-o://66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58" gracePeriod=2 Jan 24 07:54:48 crc kubenswrapper[4675]: I0124 07:54:48.948083 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:48 crc kubenswrapper[4675]: E0124 07:54:48.948822 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:49 crc kubenswrapper[4675]: I0124 07:54:49.322995 4675 generic.go:334] "Generic (PLEG): container finished" podID="34e0a03b-be29-4b7b-b0d8-773804190356" containerID="66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58" exitCode=0 Jan 24 07:54:49 crc kubenswrapper[4675]: I0124 07:54:49.323049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58"} Jan 24 07:54:49 crc kubenswrapper[4675]: I0124 07:54:49.947411 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.102339 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"34e0a03b-be29-4b7b-b0d8-773804190356\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.102591 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"34e0a03b-be29-4b7b-b0d8-773804190356\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.102669 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lbph\" (UniqueName: \"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"34e0a03b-be29-4b7b-b0d8-773804190356\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.104318 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities" (OuterVolumeSpecName: "utilities") pod "34e0a03b-be29-4b7b-b0d8-773804190356" (UID: "34e0a03b-be29-4b7b-b0d8-773804190356"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.109071 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph" (OuterVolumeSpecName: "kube-api-access-7lbph") pod "34e0a03b-be29-4b7b-b0d8-773804190356" (UID: "34e0a03b-be29-4b7b-b0d8-773804190356"). InnerVolumeSpecName "kube-api-access-7lbph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.154714 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34e0a03b-be29-4b7b-b0d8-773804190356" (UID: "34e0a03b-be29-4b7b-b0d8-773804190356"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.205889 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.205933 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.205945 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lbph\" (UniqueName: \"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.334468 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"977f817e22bdb9bece05fa5fd4571fe1cfbc9c00d3d63bd4a3396e8eb4d8f179"} Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.334532 4675 scope.go:117] "RemoveContainer" containerID="66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.335937 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.361052 4675 scope.go:117] "RemoveContainer" containerID="064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.398121 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.399298 4675 scope.go:117] "RemoveContainer" containerID="93f0d9f4bb35da6e1fd2b25bbd519bb50099fc2619ced465d2acbc19ff05a683" Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.415821 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.966543 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" path="/var/lib/kubelet/pods/34e0a03b-be29-4b7b-b0d8-773804190356/volumes" Jan 24 07:55:02 crc kubenswrapper[4675]: I0124 07:55:02.942648 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:55:02 crc kubenswrapper[4675]: E0124 07:55:02.945083 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:55:14 crc kubenswrapper[4675]: I0124 07:55:14.942832 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:55:14 crc kubenswrapper[4675]: E0124 07:55:14.943847 4675 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:55:25 crc kubenswrapper[4675]: I0124 07:55:25.943110 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:55:25 crc kubenswrapper[4675]: E0124 07:55:25.945474 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:55:36 crc kubenswrapper[4675]: I0124 07:55:36.942465 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:55:36 crc kubenswrapper[4675]: E0124 07:55:36.943380 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:55:42 crc kubenswrapper[4675]: I0124 07:55:42.364245 4675 scope.go:117] "RemoveContainer" containerID="c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd" Jan 24 07:55:48 crc kubenswrapper[4675]: I0124 07:55:48.949677 4675 scope.go:117] 
"RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:55:48 crc kubenswrapper[4675]: E0124 07:55:48.950473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:56:00 crc kubenswrapper[4675]: I0124 07:56:00.944331 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:56:00 crc kubenswrapper[4675]: E0124 07:56:00.945152 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:56:14 crc kubenswrapper[4675]: I0124 07:56:14.943779 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:56:14 crc kubenswrapper[4675]: E0124 07:56:14.946357 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:56:27 crc kubenswrapper[4675]: I0124 07:56:27.942332 
4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:56:27 crc kubenswrapper[4675]: E0124 07:56:27.943644 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:56:39 crc kubenswrapper[4675]: I0124 07:56:39.943432 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:56:39 crc kubenswrapper[4675]: E0124 07:56:39.944246 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:56:54 crc kubenswrapper[4675]: I0124 07:56:54.942927 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:56:54 crc kubenswrapper[4675]: E0124 07:56:54.944524 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:57:07 crc kubenswrapper[4675]: I0124 
07:57:07.942613 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:57:07 crc kubenswrapper[4675]: E0124 07:57:07.943370 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:57:21 crc kubenswrapper[4675]: I0124 07:57:21.943525 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:57:21 crc kubenswrapper[4675]: E0124 07:57:21.944682 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194042 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194778 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="gather" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194789 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="gather" Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194803 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-content" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194809 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-content" Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194821 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-utilities" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194826 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-utilities" Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194843 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194848 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy" Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194862 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194867 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.195057 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="gather" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.195078 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.195088 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.196042 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.219678 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-958tl"/"default-dockercfg-zclxw" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.219588 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-958tl"/"openshift-service-ca.crt" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.220379 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-958tl"/"kube-root-ca.crt" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.223234 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.300068 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.300255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.402295 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.402389 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.402956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.426425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.514144 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.980681 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.938405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerStarted","Data":"c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401"} Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.938982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerStarted","Data":"311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526"} Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.939000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerStarted","Data":"96c651293ca2642071e0b3b14c3bf9d16ad024316ab3b9e09127e018f222008d"} Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.960657 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-958tl/must-gather-64vd7" podStartSLOduration=1.96064096 podStartE2EDuration="1.96064096s" podCreationTimestamp="2026-01-24 07:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:57:30.954058291 +0000 UTC m=+3852.250163524" watchObservedRunningTime="2026-01-24 07:57:30.96064096 +0000 UTC m=+3852.256746193" Jan 24 07:57:32 crc kubenswrapper[4675]: I0124 07:57:32.943097 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:57:32 crc 
kubenswrapper[4675]: E0124 07:57:32.943691 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.344504 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/crc-debug-6h9v2"] Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.347066 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.402751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.402809 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.504586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " 
pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.504634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.504883 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.526576 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.666096 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.980122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-6h9v2" event={"ID":"03d5f217-42b1-4355-badd-256b5fa8709d","Type":"ContainerStarted","Data":"24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9"} Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.980406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-6h9v2" event={"ID":"03d5f217-42b1-4355-badd-256b5fa8709d","Type":"ContainerStarted","Data":"8855ffa6c371d306ba09e8495be65eba6dff1385e9c0091905f8b04c7dfd483c"} Jan 24 07:57:43 crc kubenswrapper[4675]: I0124 07:57:43.942459 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:57:43 crc kubenswrapper[4675]: E0124 07:57:43.943133 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:57:56 crc kubenswrapper[4675]: I0124 07:57:56.943130 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:57:56 crc kubenswrapper[4675]: E0124 07:57:56.944014 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:58:08 crc kubenswrapper[4675]: I0124 07:58:08.247410 4675 generic.go:334] "Generic (PLEG): container finished" podID="03d5f217-42b1-4355-badd-256b5fa8709d" containerID="24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9" exitCode=0 Jan 24 07:58:08 crc kubenswrapper[4675]: I0124 07:58:08.247483 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-6h9v2" event={"ID":"03d5f217-42b1-4355-badd-256b5fa8709d","Type":"ContainerDied","Data":"24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9"} Jan 24 07:58:08 crc kubenswrapper[4675]: I0124 07:58:08.952028 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:08 crc kubenswrapper[4675]: E0124 07:58:08.952413 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.383534 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.424645 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/crc-debug-6h9v2"] Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.436919 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/crc-debug-6h9v2"] Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.513411 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"03d5f217-42b1-4355-badd-256b5fa8709d\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.513855 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"03d5f217-42b1-4355-badd-256b5fa8709d\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.514066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host" (OuterVolumeSpecName: "host") pod "03d5f217-42b1-4355-badd-256b5fa8709d" (UID: "03d5f217-42b1-4355-badd-256b5fa8709d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.515553 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.525087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76" (OuterVolumeSpecName: "kube-api-access-6sq76") pod "03d5f217-42b1-4355-badd-256b5fa8709d" (UID: "03d5f217-42b1-4355-badd-256b5fa8709d"). InnerVolumeSpecName "kube-api-access-6sq76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.617535 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.266037 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8855ffa6c371d306ba09e8495be65eba6dff1385e9c0091905f8b04c7dfd483c" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.266096 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.689565 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/crc-debug-cthbx"] Jan 24 07:58:10 crc kubenswrapper[4675]: E0124 07:58:10.690149 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" containerName="container-00" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.690161 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" containerName="container-00" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.690328 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" containerName="container-00" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.691065 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.838939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.839099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.940332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvt8n\" (UniqueName: 
\"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.940455 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.940603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.952080 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" path="/var/lib/kubelet/pods/03d5f217-42b1-4355-badd-256b5fa8709d/volumes" Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.962358 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.006107 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.276774 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-cthbx" event={"ID":"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3","Type":"ContainerStarted","Data":"f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef"} Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.277114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-cthbx" event={"ID":"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3","Type":"ContainerStarted","Data":"6432092f2a679fd401f62b355afa6cd2e1a6614df1498db7e44271bbdc1b5fea"} Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.294560 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-958tl/crc-debug-cthbx" podStartSLOduration=1.294543278 podStartE2EDuration="1.294543278s" podCreationTimestamp="2026-01-24 07:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:58:11.29211608 +0000 UTC m=+3892.588221313" watchObservedRunningTime="2026-01-24 07:58:11.294543278 +0000 UTC m=+3892.590648501" Jan 24 07:58:12 crc kubenswrapper[4675]: I0124 07:58:12.286474 4675 generic.go:334] "Generic (PLEG): container finished" podID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerID="f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef" exitCode=0 Jan 24 07:58:12 crc kubenswrapper[4675]: I0124 07:58:12.286521 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-cthbx" event={"ID":"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3","Type":"ContainerDied","Data":"f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef"} Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.396677 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.433531 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/crc-debug-cthbx"] Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.442851 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/crc-debug-cthbx"] Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.526226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.526363 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host" (OuterVolumeSpecName: "host") pod "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" (UID: "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.526411 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.527123 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.535780 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n" (OuterVolumeSpecName: "kube-api-access-bvt8n") pod "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" (UID: "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3"). InnerVolumeSpecName "kube-api-access-bvt8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.628499 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.302086 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6432092f2a679fd401f62b355afa6cd2e1a6614df1498db7e44271bbdc1b5fea" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.302149 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.639251 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/crc-debug-p4l6v"] Jan 24 07:58:14 crc kubenswrapper[4675]: E0124 07:58:14.639938 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerName="container-00" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.639972 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerName="container-00" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.640241 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerName="container-00" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.640814 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.746731 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.746933 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.849352 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.849435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.849858 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.869652 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.955552 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" path="/var/lib/kubelet/pods/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3/volumes" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.956112 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: W0124 07:58:14.991592 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5404a634_005c_49e9_b722_a5d2c1c1c0eb.slice/crio-a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d WatchSource:0}: Error finding container a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d: Status 404 returned error can't find the container with id a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.340039 4675 generic.go:334] "Generic (PLEG): container finished" podID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerID="e930f56fcc5e1dfe4baa09ef2316036cdb2d6f0adc335e5d2db35065858833aa" exitCode=0 Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.340210 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-p4l6v" event={"ID":"5404a634-005c-49e9-b722-a5d2c1c1c0eb","Type":"ContainerDied","Data":"e930f56fcc5e1dfe4baa09ef2316036cdb2d6f0adc335e5d2db35065858833aa"} Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.340325 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-p4l6v" event={"ID":"5404a634-005c-49e9-b722-a5d2c1c1c0eb","Type":"ContainerStarted","Data":"a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d"} Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.392566 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/crc-debug-p4l6v"] Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.401800 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/crc-debug-p4l6v"] Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.454119 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.586402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.587905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.588193 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host" (OuterVolumeSpecName: "host") pod "5404a634-005c-49e9-b722-a5d2c1c1c0eb" (UID: "5404a634-005c-49e9-b722-a5d2c1c1c0eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.588847 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.602009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px" (OuterVolumeSpecName: "kube-api-access-qx4px") pod "5404a634-005c-49e9-b722-a5d2c1c1c0eb" (UID: "5404a634-005c-49e9-b722-a5d2c1c1c0eb"). InnerVolumeSpecName "kube-api-access-qx4px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.690469 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.952895 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" path="/var/lib/kubelet/pods/5404a634-005c-49e9-b722-a5d2c1c1c0eb/volumes" Jan 24 07:58:17 crc kubenswrapper[4675]: I0124 07:58:17.356985 4675 scope.go:117] "RemoveContainer" containerID="e930f56fcc5e1dfe4baa09ef2316036cdb2d6f0adc335e5d2db35065858833aa" Jan 24 07:58:17 crc kubenswrapper[4675]: I0124 07:58:17.357098 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:22 crc kubenswrapper[4675]: I0124 07:58:22.949144 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:22 crc kubenswrapper[4675]: E0124 07:58:22.949911 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:58:34 crc kubenswrapper[4675]: I0124 07:58:34.943356 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:34 crc kubenswrapper[4675]: E0124 07:58:34.944198 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:58:48 crc kubenswrapper[4675]: I0124 07:58:48.947851 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:49 crc kubenswrapper[4675]: I0124 07:58:49.689954 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"} Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.060010 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api/0.log" Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.266613 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api-log/0.log" Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.348693 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener/0.log" Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.449124 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener-log/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.105671 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.142872 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker-log/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.421023 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-central-agent/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.427187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw_e9b8f08b-6ece-4b46-86c0-9c353d61c50c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.517243 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-notification-agent/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.672066 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/sg-core/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.728854 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/proxy-httpd/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.890300 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.010282 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api-log/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.202624 4675 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/cinder-scheduler/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.245000 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/probe/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.464995 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-td879_bc52fac9-92d8-4555-b942-5f0dcb4bf6f3/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.556860 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm_eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.835805 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.033739 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.433244 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/dnsmasq-dns/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.463475 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49lhh_09d123a4-63c4-4269-b4e1-12932baedfd0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.709857 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-httpd/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.740049 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-log/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.987683 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-httpd/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.991708 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-log/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.298580 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/1.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.457601 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.646277 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh_2d09456f-a230-420b-b288-c0dc3e8a6e22/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.648017 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon-log/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.890527 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vbvgv_27ad7637-701b-43e1-8440-0fd32522fc56/install-os-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.073106 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dbffd67c8-k8gzb_405f0f26-61a4-4420-a147-43d7b86ebb8e/keystone-api/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.154036 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b742b344-80ea-48bf-bd28-8f1be00b4442/kube-state-metrics/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.306256 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq_d457c71e-ef41-4bf9-a59b-b3221df26b41/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.756022 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-httpd/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.763478 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-api/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.838662 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b2446e52-3d97-46f2-ac99-4bb1af82d302/memcached/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.854406 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g_388e10c7-15e4-40d5-94ed-5c6612f7fbfe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.163836 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-log/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.262513 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_a3a43606-cba1-4fca-93c4-a1937ee449cc/nova-cell0-conductor-conductor/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.455890 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-api/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.792941 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8afe3d83-5678-47e9-be7d-dfbf50fa5bc9/nova-cell1-conductor-conductor/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.896887 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-k8fng_f4024f70-df50-442c-bcd5-c599d978277c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.932766 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a485ae65-6b4d-4cc6-9623-dc0b722f47e8/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.118118 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-log/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.368116 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_361b5d16-2808-40ad-88a0-f07fd4c33e3e/nova-scheduler-scheduler/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.377610 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.936908 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-metadata/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: 
I0124 07:59:11.075037 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/galera/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.095764 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.118508 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.361009 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/galera/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.375971 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.377465 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb/openstackclient/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.574755 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2x2kb_b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1/ovn-controller/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.597386 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b7pft_1e0062ff-7e89-4c55-8796-de1c9e311dd2/openstack-network-exporter/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.678990 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.869989 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovs-vswitchd/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.966259 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.998075 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server/0.log" Jan 24 07:59:12 crc kubenswrapper[4675]: I0124 07:59:12.034267 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7vbln_3e407880-d27a-4aa2-bb81-a87bb20ffcf1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:12 crc kubenswrapper[4675]: I0124 07:59:12.152682 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/openstack-network-exporter/0.log" Jan 24 07:59:12 crc kubenswrapper[4675]: I0124 07:59:12.887741 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/ovn-northd/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.013779 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/openstack-network-exporter/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.044012 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/ovsdbserver-nb/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.151034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/openstack-network-exporter/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 
07:59:13.216279 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/ovsdbserver-sb/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.339275 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-api/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.380032 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-log/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.506773 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.713482 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/rabbitmq/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.722975 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.736678 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.978082 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/rabbitmq/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.051429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw_7b1b0570-d3a2-4029-bcf8-f41144ea0f06/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc 
kubenswrapper[4675]: I0124 07:59:14.058586 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.228923 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zd8ln_55150857-7da2-4609-84be-9cbaa28141ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.289160 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q_774fb762-6506-4e0c-9732-9208f7802057/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.376584 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ln8x2_3bc4008d-f8c6-4745-b524-d6136632cbfb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.553415 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wq6r9_191f15b0-8a3b-4dc4-bc49-9003c61619bf/ssh-known-hosts-edpm-deployment/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.680438 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-server/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.717228 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-httpd/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.823005 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sz46b_57da3a87-eeeb-47c8-b1bd-6a160dd81ff8/swift-ring-rebalance/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 
07:59:14.934623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-reaper/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.966847 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-auditor/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.085799 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-server/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.101942 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-replicator/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.181043 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-auditor/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.222858 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-replicator/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.249305 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-server/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.341944 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-updater/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.383096 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-auditor/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.386840 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-expirer/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.461433 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-replicator/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.594021 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/rsync/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.628992 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-server/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.632518 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-updater/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.640455 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/swift-recon-cron/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 07:59:16.063790 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e021dd7-397f-4546-a38b-c8c13a1c830d/tempest-tests-tempest-tests-runner/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 07:59:16.208009 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx_e47d7738-3361-429e-90f9-02dee4f0052e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 07:59:16.265469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6051ab9a-5c43-4757-a1ff-3f199dee0a79/test-operator-logs-container/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 
07:59:16.455110 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc_e9c128cc-910c-4ef2-9b56-14adf4d264b3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.187712 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-dwbq6_2db25911-f36e-43ae-8f47-b042ec82266e/manager/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.733861 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.945528 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.967541 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.984077 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.153469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.189168 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.230093 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/extract/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.375888 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-6jbwg_b8285f65-9930-4bb9-9e18-b6ffe19f45fb/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.447827 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-79fwx_6003a1f9-ad0e-49f6-8750-6ac2208560cc/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.668745 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-thqtz_e7263d16-14c3-4254-821a-cbf99b7cf3e4/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.731838 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-mqk98_7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.876850 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-67vkh_4aa5aa88-c6f2-4000-9a9d-3b14e23220de/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.104273 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-l7jq5_06f423e8-7ba9-497d-a587-cc880d66625b/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.173411 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-c5658_743af71f-3542-439c-b3a1-33a7b9ae34f1/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.329477 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bqd4q_5b3a45f7-a1eb-44a2-b0be-7c77b190d50c/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.355981 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6lq96_e09ce8a8-a2a4-4fec-b36d-a97910aced0f/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.553035 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-vjf84_7660e41e-527d-4806-8ef3-6dee25fa72c5/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.767789 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dzvlp_724ac56d-9f4e-40f9-98f7-3a65c807f89c/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.873013 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4lmvf_6f867475-7eee-431c-97ee-12ae861193c7/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.987819 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q6qn9_bdc167a3-9335-4b3d-9696-a1d03b9ae618/manager/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.103917 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk_ac97fbc7-211e-41e3-8e16-aff853a7c9f4/manager/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 
07:59:47.229638 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d498c57f9-4vbdv_fc267189-e8ca-412c-bb9a-6b251571a514/operator/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.551515 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-d4hsh_954076ba-3e6f-4e5b-9b3f-4637840d5021/registry-server/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.809659 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-n4kll_a1041f21-5d7d-4b17-84ff-ee83332e604d/manager/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.924767 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-l5hrz_20b0ee18-4569-4428-956f-d8795904f368/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.130744 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9cmpf_b7d1f492-700c-492e-a1c2-eae496f0133c/operator/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.267329 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d55b89685-9rvmf_4bfb9011-058d-494d-96ce-a39202c7b851/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.394512 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-688fccdd58-dkxf7_d94b056e-c445-4033-8d02-a794dae4b671/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.507665 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-n6jmw_47e89f8e-f652-43a1-a36a-2db184700f3e/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 
07:59:48.563770 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-k7crk_fae349a1-6c08-4424-abe2-42dddccd55cc/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.644245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-9fkjr_f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480/manager/0.log" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.241639 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g"] Jan 24 08:00:00 crc kubenswrapper[4675]: E0124 08:00:00.243223 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerName="container-00" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.243252 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerName="container-00" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.243767 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerName="container-00" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.245089 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.247863 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.248921 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.259345 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g"] Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.355783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.355862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.355910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.458217 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.459668 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.460565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.459612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.466696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.486510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.583225 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:01 crc kubenswrapper[4675]: I0124 08:00:01.060591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g"] Jan 24 08:00:01 crc kubenswrapper[4675]: W0124 08:00:01.076592 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eab032c_1f35_4afc_9150_acfc2580e18c.slice/crio-9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4 WatchSource:0}: Error finding container 9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4: Status 404 returned error can't find the container with id 9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4 Jan 24 08:00:01 crc kubenswrapper[4675]: I0124 08:00:01.393848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerStarted","Data":"8569c3793eb334184607d23f5aab8aef74560951d8618d003b17b886b6ca6126"} Jan 24 08:00:01 crc 
kubenswrapper[4675]: I0124 08:00:01.394218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerStarted","Data":"9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4"} Jan 24 08:00:01 crc kubenswrapper[4675]: I0124 08:00:01.422902 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" podStartSLOduration=1.422882209 podStartE2EDuration="1.422882209s" podCreationTimestamp="2026-01-24 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:01.418920462 +0000 UTC m=+4002.715025725" watchObservedRunningTime="2026-01-24 08:00:01.422882209 +0000 UTC m=+4002.718987442" Jan 24 08:00:02 crc kubenswrapper[4675]: I0124 08:00:02.404084 4675 generic.go:334] "Generic (PLEG): container finished" podID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerID="8569c3793eb334184607d23f5aab8aef74560951d8618d003b17b886b6ca6126" exitCode=0 Jan 24 08:00:02 crc kubenswrapper[4675]: I0124 08:00:02.404223 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerDied","Data":"8569c3793eb334184607d23f5aab8aef74560951d8618d003b17b886b6ca6126"} Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.784687 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.951807 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"8eab032c-1f35-4afc-9150-acfc2580e18c\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.951958 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"8eab032c-1f35-4afc-9150-acfc2580e18c\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.952015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"8eab032c-1f35-4afc-9150-acfc2580e18c\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.952619 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8eab032c-1f35-4afc-9150-acfc2580e18c" (UID: "8eab032c-1f35-4afc-9150-acfc2580e18c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.958001 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7" (OuterVolumeSpecName: "kube-api-access-ktqw7") pod "8eab032c-1f35-4afc-9150-acfc2580e18c" (UID: "8eab032c-1f35-4afc-9150-acfc2580e18c"). 
InnerVolumeSpecName "kube-api-access-ktqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.960035 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8eab032c-1f35-4afc-9150-acfc2580e18c" (UID: "8eab032c-1f35-4afc-9150-acfc2580e18c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.053940 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.053965 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.053975 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.424567 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerDied","Data":"9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4"} Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.424928 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.424623 4675 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.894943 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.919549 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.960223 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" path="/var/lib/kubelet/pods/992bc9f8-4adf-4940-95d5-942895a4d935/volumes" Jan 24 08:00:11 crc kubenswrapper[4675]: I0124 08:00:11.333395 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kdjm5_e08de50b-8092-4f29-b2a8-a391b4778142/control-plane-machine-set-operator/0.log" Jan 24 08:00:11 crc kubenswrapper[4675]: I0124 08:00:11.532709 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/kube-rbac-proxy/0.log" Jan 24 08:00:11 crc kubenswrapper[4675]: I0124 08:00:11.579251 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/machine-api-operator/0.log" Jan 24 08:00:26 crc kubenswrapper[4675]: I0124 08:00:26.003851 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gt7xw_f9d3eaae-49ca-400c-a277-bdbad7f8125a/cert-manager-controller/0.log" Jan 24 08:00:26 crc kubenswrapper[4675]: I0124 08:00:26.189257 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6kp8k_99008be6-effb-4dc7-a761-ee291c03f093/cert-manager-cainjector/0.log" Jan 24 08:00:26 crc kubenswrapper[4675]: I0124 08:00:26.269683 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lthpk_261785a7-b436-4597-a36b-473d27769006/cert-manager-webhook/0.log" Jan 24 08:00:41 crc kubenswrapper[4675]: I0124 08:00:41.727614 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-szblh_b289d862-4851-4f88-9a5b-4bed8cd70bd8/nmstate-console-plugin/0.log" Jan 24 08:00:41 crc kubenswrapper[4675]: I0124 08:00:41.879564 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ljst6_8c82b668-f857-4de6-a938-333a7e44591f/nmstate-handler/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.011936 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/nmstate-metrics/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.014240 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/kube-rbac-proxy/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.193378 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dm24p_b344cabd-3dd6-4691-990b-045aaf4c622f/nmstate-operator/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.256429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-77dfm_469eb31f-c261-4d7f-8a12-c10ed969bd55/nmstate-webhook/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.558925 4675 scope.go:117] "RemoveContainer" containerID="4ceca7bb4c3f8f330a726083a805861d2285d706134fb31908c2ce567855cf82" Jan 
24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.798382 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:00:59 crc kubenswrapper[4675]: E0124 08:00:59.799304 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerName="collect-profiles" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.799318 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerName="collect-profiles" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.799540 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerName="collect-profiles" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.800831 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.814539 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.895626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.895712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc 
kubenswrapper[4675]: I0124 08:00:59.895852 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.997429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.997538 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.997626 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.998028 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc 
kubenswrapper[4675]: I0124 08:00:59.998330 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.039472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.120001 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.156659 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29487361-fcfvr"] Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.163029 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.164741 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29487361-fcfvr"] Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311235 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311316 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311341 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311410 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412748 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412986 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.420151 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.420558 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.421209 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.440536 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.551473 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:00.988079 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.193588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29487361-fcfvr"] Jan 24 08:01:01 crc kubenswrapper[4675]: W0124 08:01:01.216418 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5ea98d0_9373_4962_ba8a_a79643b7fdf3.slice/crio-6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7 WatchSource:0}: Error finding container 6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7: Status 404 returned error can't find the container with id 6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7 Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.907762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerStarted","Data":"d301abf381fbe31993fa9dbe948cb5232c02e73f603e77a2063a184595eaa1c8"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.908012 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerStarted","Data":"6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.909613 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" exitCode=0 Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.909649 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.909670 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerStarted","Data":"2013a713f73b80d7b45183fa61f7e77296379195089f4466ec9588945f1c9cfc"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.911138 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.929056 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29487361-fcfvr" podStartSLOduration=1.9290433519999999 podStartE2EDuration="1.929043352s" podCreationTimestamp="2026-01-24 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:01.925813234 +0000 UTC m=+4063.221918447" watchObservedRunningTime="2026-01-24 08:01:01.929043352 +0000 UTC m=+4063.225148575" Jan 24 08:01:02 crc kubenswrapper[4675]: I0124 08:01:02.919119 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerStarted","Data":"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1"} Jan 24 08:01:03 crc kubenswrapper[4675]: I0124 08:01:03.928772 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" exitCode=0 Jan 24 08:01:03 crc kubenswrapper[4675]: I0124 08:01:03.928859 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1"} Jan 24 08:01:04 crc kubenswrapper[4675]: I0124 08:01:04.940040 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerID="d301abf381fbe31993fa9dbe948cb5232c02e73f603e77a2063a184595eaa1c8" exitCode=0 Jan 24 08:01:04 crc kubenswrapper[4675]: I0124 08:01:04.940112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerDied","Data":"d301abf381fbe31993fa9dbe948cb5232c02e73f603e77a2063a184595eaa1c8"} Jan 24 08:01:05 crc kubenswrapper[4675]: I0124 08:01:05.952265 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerStarted","Data":"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753"} Jan 24 08:01:05 crc kubenswrapper[4675]: I0124 08:01:05.979325 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2t77m" podStartSLOduration=4.540052843 podStartE2EDuration="6.979310872s" podCreationTimestamp="2026-01-24 08:00:59 +0000 UTC" firstStartedPulling="2026-01-24 08:01:01.910931012 +0000 UTC m=+4063.207036235" lastFinishedPulling="2026-01-24 08:01:04.350189041 +0000 UTC m=+4065.646294264" observedRunningTime="2026-01-24 08:01:05.975145281 +0000 UTC m=+4067.271250504" watchObservedRunningTime="2026-01-24 08:01:05.979310872 +0000 UTC m=+4067.275416095" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.321582 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.446901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.446950 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.447024 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.447927 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.452369 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv" (OuterVolumeSpecName: "kube-api-access-l66qv") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "kube-api-access-l66qv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.461025 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.490230 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.519871 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data" (OuterVolumeSpecName: "config-data") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549884 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549916 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549925 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549933 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.961451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerDied","Data":"6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7"} Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.961489 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.962345 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:08 crc kubenswrapper[4675]: I0124 08:01:08.634028 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:01:08 crc kubenswrapper[4675]: I0124 08:01:08.634388 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:01:10 crc kubenswrapper[4675]: I0124 08:01:10.120699 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:10 crc kubenswrapper[4675]: I0124 08:01:10.122943 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:10 crc kubenswrapper[4675]: I0124 08:01:10.178074 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:11 crc kubenswrapper[4675]: I0124 08:01:11.055374 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:11 crc kubenswrapper[4675]: I0124 08:01:11.110874 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.019304 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2t77m" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" 
containerName="registry-server" containerID="cri-o://9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" gracePeriod=2 Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.492952 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.597012 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"7b602238-1aa0-4cd2-837f-5ee81d490342\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.597092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"7b602238-1aa0-4cd2-837f-5ee81d490342\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.597141 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"7b602238-1aa0-4cd2-837f-5ee81d490342\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.605522 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities" (OuterVolumeSpecName: "utilities") pod "7b602238-1aa0-4cd2-837f-5ee81d490342" (UID: "7b602238-1aa0-4cd2-837f-5ee81d490342"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.625010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd" (OuterVolumeSpecName: "kube-api-access-cjkvd") pod "7b602238-1aa0-4cd2-837f-5ee81d490342" (UID: "7b602238-1aa0-4cd2-837f-5ee81d490342"). InnerVolumeSpecName "kube-api-access-cjkvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.691709 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b602238-1aa0-4cd2-837f-5ee81d490342" (UID: "7b602238-1aa0-4cd2-837f-5ee81d490342"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.699455 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.699851 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.699867 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028509 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b602238-1aa0-4cd2-837f-5ee81d490342" 
containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" exitCode=0 Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753"} Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028580 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"2013a713f73b80d7b45183fa61f7e77296379195089f4466ec9588945f1c9cfc"} Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028597 4675 scope.go:117] "RemoveContainer" containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028605 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.049169 4675 scope.go:117] "RemoveContainer" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.069369 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.081116 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.413928 4675 scope.go:117] "RemoveContainer" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.450502 4675 scope.go:117] "RemoveContainer" containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" Jan 24 08:01:14 crc kubenswrapper[4675]: E0124 08:01:14.450980 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753\": container with ID starting with 9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753 not found: ID does not exist" containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451024 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753"} err="failed to get container status \"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753\": rpc error: code = NotFound desc = could not find container \"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753\": container with ID starting with 9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753 not 
found: ID does not exist" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451049 4675 scope.go:117] "RemoveContainer" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" Jan 24 08:01:14 crc kubenswrapper[4675]: E0124 08:01:14.451543 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1\": container with ID starting with 74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1 not found: ID does not exist" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451593 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1"} err="failed to get container status \"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1\": rpc error: code = NotFound desc = could not find container \"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1\": container with ID starting with 74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1 not found: ID does not exist" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451628 4675 scope.go:117] "RemoveContainer" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" Jan 24 08:01:14 crc kubenswrapper[4675]: E0124 08:01:14.452021 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801\": container with ID starting with 936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801 not found: ID does not exist" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.452050 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801"} err="failed to get container status \"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801\": rpc error: code = NotFound desc = could not find container \"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801\": container with ID starting with 936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801 not found: ID does not exist" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.951990 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" path="/var/lib/kubelet/pods/7b602238-1aa0-4cd2-837f-5ee81d490342/volumes" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.477235 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/controller/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.487252 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/kube-rbac-proxy/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.705176 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.851798 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.899818 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.910158 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.952461 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.093583 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.101218 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.102569 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.129582 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.301510 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.328356 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.355779 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.416254 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/controller/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.630317 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.660495 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.721022 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy-frr/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.015114 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/reloader/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.179344 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-skd24_032ac1eb-bb7f-4f94-b9ad-4d710032f3af/frr-k8s-webhook-server/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.437703 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d867674d-x4v6v_0cf0ee32-c416-4629-a441-268fbe054062/manager/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.629542 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.809551 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f499b46f-tntmc_893cbc8e-86ae-4910-8693-061301da0ba6/webhook-server/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.952643 4675 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/kube-rbac-proxy/0.log" Jan 24 08:01:18 crc kubenswrapper[4675]: I0124 08:01:18.223347 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/speaker/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.088794 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.279433 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.325734 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.366712 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.540430 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.581383 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/extract/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.590098 4675 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.752250 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.935225 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.977512 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.022352 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.188601 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.281972 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.284315 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/extract/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.426388 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.652191 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.679478 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.685794 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.901845 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.928384 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.420515 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/registry-server/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.490969 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.619287 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.668835 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.689642 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.886797 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.991780 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.330945 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.383881 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9cx7r_83c80cb7-74c3-417a-8d8e-54cdcf640b5b/marketplace-operator/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.579475 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/registry-server/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.623356 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.658175 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.751297 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.921216 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.018187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.087735 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/registry-server/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.213120 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.380648 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.410784 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.424245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.594406 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.629540 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.629603 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.696640 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 08:01:39 crc kubenswrapper[4675]: I0124 08:01:39.000569 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/registry-server/0.log" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.629914 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.630495 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.630559 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.631614 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.631710 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0" gracePeriod=600 Jan 24 08:02:09 crc kubenswrapper[4675]: I0124 08:02:09.998098 4675 generic.go:334] "Generic 
(PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0" exitCode=0 Jan 24 08:02:09 crc kubenswrapper[4675]: I0124 08:02:09.998189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"} Jan 24 08:02:09 crc kubenswrapper[4675]: I0124 08:02:09.998765 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 08:02:11 crc kubenswrapper[4675]: I0124 08:02:11.008631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"} Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.115357 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:08 crc kubenswrapper[4675]: E0124 08:03:08.117496 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-utilities" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.117595 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-utilities" Jan 24 08:03:08 crc kubenswrapper[4675]: E0124 08:03:08.117680 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerName="keystone-cron" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.117781 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerName="keystone-cron" Jan 24 08:03:08 crc 
kubenswrapper[4675]: E0124 08:03:08.117865 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="registry-server" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.117934 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="registry-server" Jan 24 08:03:08 crc kubenswrapper[4675]: E0124 08:03:08.118031 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-content" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.118101 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-content" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.118480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="registry-server" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.118568 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerName="keystone-cron" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.120337 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.135817 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.281427 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.281483 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.281574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.383105 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.383829 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.384157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.384265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.384598 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.403873 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.440205 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.966418 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:09 crc kubenswrapper[4675]: W0124 08:03:09.096370 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3824da95_d177_420a_b366_01067cecb438.slice/crio-db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db WatchSource:0}: Error finding container db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db: Status 404 returned error can't find the container with id db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db Jan 24 08:03:09 crc kubenswrapper[4675]: I0124 08:03:09.557846 4675 generic.go:334] "Generic (PLEG): container finished" podID="3824da95-d177-420a-b366-01067cecb438" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" exitCode=0 Jan 24 08:03:09 crc kubenswrapper[4675]: I0124 08:03:09.557945 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78"} Jan 24 08:03:09 crc kubenswrapper[4675]: I0124 08:03:09.558252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerStarted","Data":"db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db"} Jan 24 08:03:11 crc kubenswrapper[4675]: I0124 08:03:11.578680 4675 generic.go:334] "Generic (PLEG): container finished" podID="3824da95-d177-420a-b366-01067cecb438" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" exitCode=0 Jan 24 08:03:11 crc kubenswrapper[4675]: I0124 
08:03:11.578756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905"} Jan 24 08:03:12 crc kubenswrapper[4675]: I0124 08:03:12.593661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerStarted","Data":"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130"} Jan 24 08:03:12 crc kubenswrapper[4675]: I0124 08:03:12.614529 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nqr4z" podStartSLOduration=2.08217616 podStartE2EDuration="4.614514727s" podCreationTimestamp="2026-01-24 08:03:08 +0000 UTC" firstStartedPulling="2026-01-24 08:03:09.559694795 +0000 UTC m=+4190.855800018" lastFinishedPulling="2026-01-24 08:03:12.092033362 +0000 UTC m=+4193.388138585" observedRunningTime="2026-01-24 08:03:12.611551925 +0000 UTC m=+4193.907657148" watchObservedRunningTime="2026-01-24 08:03:12.614514727 +0000 UTC m=+4193.910619950" Jan 24 08:03:18 crc kubenswrapper[4675]: I0124 08:03:18.440608 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:18 crc kubenswrapper[4675]: I0124 08:03:18.442101 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:18 crc kubenswrapper[4675]: I0124 08:03:18.855614 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:19 crc kubenswrapper[4675]: I0124 08:03:19.755103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 
08:03:19 crc kubenswrapper[4675]: I0124 08:03:19.813288 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:21 crc kubenswrapper[4675]: I0124 08:03:21.699237 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nqr4z" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" containerID="cri-o://bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" gracePeriod=2 Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.190997 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.263438 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"3824da95-d177-420a-b366-01067cecb438\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.263499 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"3824da95-d177-420a-b366-01067cecb438\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.263813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"3824da95-d177-420a-b366-01067cecb438\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.264791 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities" (OuterVolumeSpecName: "utilities") pod "3824da95-d177-420a-b366-01067cecb438" (UID: "3824da95-d177-420a-b366-01067cecb438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.269909 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn" (OuterVolumeSpecName: "kube-api-access-rwqtn") pod "3824da95-d177-420a-b366-01067cecb438" (UID: "3824da95-d177-420a-b366-01067cecb438"). InnerVolumeSpecName "kube-api-access-rwqtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.285593 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3824da95-d177-420a-b366-01067cecb438" (UID: "3824da95-d177-420a-b366-01067cecb438"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.366028 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.366091 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.366105 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717556 4675 generic.go:334] "Generic (PLEG): container finished" podID="3824da95-d177-420a-b366-01067cecb438" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" exitCode=0 Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717615 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717624 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130"} Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717649 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db"} Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717679 4675 scope.go:117] "RemoveContainer" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.753086 4675 scope.go:117] "RemoveContainer" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.762084 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.772646 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.795182 4675 scope.go:117] "RemoveContainer" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.817366 4675 scope.go:117] "RemoveContainer" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" Jan 24 08:03:22 crc kubenswrapper[4675]: E0124 08:03:22.817762 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130\": container with ID starting with bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130 not found: ID does not exist" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.817818 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130"} err="failed to get container status \"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130\": rpc error: code = NotFound desc = could not find container \"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130\": container with ID starting with bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130 not found: ID does not exist" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.817850 4675 scope.go:117] "RemoveContainer" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" Jan 24 08:03:22 crc kubenswrapper[4675]: E0124 08:03:22.818210 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905\": container with ID starting with 2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905 not found: ID does not exist" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.818246 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905"} err="failed to get container status \"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905\": rpc error: code = NotFound desc = could not find container \"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905\": container with ID 
starting with 2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905 not found: ID does not exist" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.818271 4675 scope.go:117] "RemoveContainer" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" Jan 24 08:03:22 crc kubenswrapper[4675]: E0124 08:03:22.818616 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78\": container with ID starting with 9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78 not found: ID does not exist" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.818645 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78"} err="failed to get container status \"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78\": rpc error: code = NotFound desc = could not find container \"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78\": container with ID starting with 9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78 not found: ID does not exist" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.973525 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3824da95-d177-420a-b366-01067cecb438" path="/var/lib/kubelet/pods/3824da95-d177-420a-b366-01067cecb438/volumes" Jan 24 08:03:41 crc kubenswrapper[4675]: I0124 08:03:41.906952 4675 generic.go:334] "Generic (PLEG): container finished" podID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerID="311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526" exitCode=0 Jan 24 08:03:41 crc kubenswrapper[4675]: I0124 08:03:41.907077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerDied","Data":"311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526"} Jan 24 08:03:41 crc kubenswrapper[4675]: I0124 08:03:41.908689 4675 scope.go:117] "RemoveContainer" containerID="311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526" Jan 24 08:03:42 crc kubenswrapper[4675]: I0124 08:03:42.432768 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/gather/0.log" Jan 24 08:03:42 crc kubenswrapper[4675]: I0124 08:03:42.650234 4675 scope.go:117] "RemoveContainer" containerID="24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9" Jan 24 08:03:54 crc kubenswrapper[4675]: I0124 08:03:54.657439 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 08:03:54 crc kubenswrapper[4675]: I0124 08:03:54.658322 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-958tl/must-gather-64vd7" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" containerID="cri-o://c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401" gracePeriod=2 Jan 24 08:03:54 crc kubenswrapper[4675]: I0124 08:03:54.670435 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.032721 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/copy/0.log" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.033303 4675 generic.go:334] "Generic (PLEG): container finished" podID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerID="c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401" exitCode=143 Jan 24 08:03:55 crc 
kubenswrapper[4675]: I0124 08:03:55.609075 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/copy/0.log" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.609756 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.722895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.722977 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.737931 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj" (OuterVolumeSpecName: "kube-api-access-zghsj") pod "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" (UID: "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5"). InnerVolumeSpecName "kube-api-access-zghsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.825884 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.877729 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" (UID: "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.928838 4675 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.045075 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/copy/0.log" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.045652 4675 scope.go:117] "RemoveContainer" containerID="c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.045736 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.079111 4675 scope.go:117] "RemoveContainer" containerID="311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.963176 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" path="/var/lib/kubelet/pods/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/volumes" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.636454 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637449 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="gather" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637463 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="gather" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637502 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637510 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637530 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-content" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637538 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-content" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637561 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-utilities" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637569 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-utilities" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637582 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637591 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637840 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637857 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="gather" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637880 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.639483 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.649022 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.812275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.812321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.812408 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.913840 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.913884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.914011 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.914911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.915179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.944515 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.962069 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:08 crc kubenswrapper[4675]: I0124 08:04:08.448921 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:09 crc kubenswrapper[4675]: I0124 08:04:09.168923 4675 generic.go:334] "Generic (PLEG): container finished" podID="33cfc116-7294-4c74-89f4-e4f3417da631" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474" exitCode=0 Jan 24 08:04:09 crc kubenswrapper[4675]: I0124 08:04:09.168975 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474"} Jan 24 08:04:09 crc kubenswrapper[4675]: I0124 08:04:09.169171 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerStarted","Data":"cd9457c3590ed26f1c22e0f13467d2d764d10baee5be2a45e9a9de71dbebc229"} Jan 24 08:04:11 crc kubenswrapper[4675]: I0124 08:04:11.191368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerStarted","Data":"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"} Jan 24 08:04:14 crc kubenswrapper[4675]: I0124 08:04:14.216093 4675 generic.go:334] "Generic (PLEG): container finished" podID="33cfc116-7294-4c74-89f4-e4f3417da631" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" exitCode=0 Jan 24 08:04:14 crc kubenswrapper[4675]: I0124 08:04:14.216158 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" 
event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"} Jan 24 08:04:15 crc kubenswrapper[4675]: I0124 08:04:15.228186 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerStarted","Data":"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d"} Jan 24 08:04:15 crc kubenswrapper[4675]: I0124 08:04:15.255648 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdxbg" podStartSLOduration=2.759804538 podStartE2EDuration="8.255629801s" podCreationTimestamp="2026-01-24 08:04:07 +0000 UTC" firstStartedPulling="2026-01-24 08:04:09.170659136 +0000 UTC m=+4250.466764379" lastFinishedPulling="2026-01-24 08:04:14.666484409 +0000 UTC m=+4255.962589642" observedRunningTime="2026-01-24 08:04:15.250531398 +0000 UTC m=+4256.546636621" watchObservedRunningTime="2026-01-24 08:04:15.255629801 +0000 UTC m=+4256.551735024" Jan 24 08:04:17 crc kubenswrapper[4675]: I0124 08:04:17.963155 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:17 crc kubenswrapper[4675]: I0124 08:04:17.963493 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:19 crc kubenswrapper[4675]: I0124 08:04:19.024359 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sdxbg" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" probeResult="failure" output=< Jan 24 08:04:19 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 08:04:19 crc kubenswrapper[4675]: > Jan 24 08:04:28 crc kubenswrapper[4675]: I0124 08:04:28.031706 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:28 crc kubenswrapper[4675]: I0124 08:04:28.106440 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:28 crc kubenswrapper[4675]: I0124 08:04:28.279115 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.360065 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdxbg" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" containerID="cri-o://9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" gracePeriod=2 Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.867865 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.982951 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"33cfc116-7294-4c74-89f4-e4f3417da631\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.983232 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"33cfc116-7294-4c74-89f4-e4f3417da631\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.983560 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod 
\"33cfc116-7294-4c74-89f4-e4f3417da631\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.984116 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities" (OuterVolumeSpecName: "utilities") pod "33cfc116-7294-4c74-89f4-e4f3417da631" (UID: "33cfc116-7294-4c74-89f4-e4f3417da631"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.015950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl" (OuterVolumeSpecName: "kube-api-access-qhfnl") pod "33cfc116-7294-4c74-89f4-e4f3417da631" (UID: "33cfc116-7294-4c74-89f4-e4f3417da631"). InnerVolumeSpecName "kube-api-access-qhfnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.086419 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.086457 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.110207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33cfc116-7294-4c74-89f4-e4f3417da631" (UID: "33cfc116-7294-4c74-89f4-e4f3417da631"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.188227 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.370051 4675 generic.go:334] "Generic (PLEG): container finished" podID="33cfc116-7294-4c74-89f4-e4f3417da631" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" exitCode=0 Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.370115 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.370135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d"} Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.371830 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"cd9457c3590ed26f1c22e0f13467d2d764d10baee5be2a45e9a9de71dbebc229"} Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.371867 4675 scope.go:117] "RemoveContainer" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.394456 4675 scope.go:117] "RemoveContainer" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.429444 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 
08:04:30.432064 4675 scope.go:117] "RemoveContainer" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.440342 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.459943 4675 scope.go:117] "RemoveContainer" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" Jan 24 08:04:30 crc kubenswrapper[4675]: E0124 08:04:30.460431 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d\": container with ID starting with 9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d not found: ID does not exist" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460477 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d"} err="failed to get container status \"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d\": rpc error: code = NotFound desc = could not find container \"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d\": container with ID starting with 9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d not found: ID does not exist" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460503 4675 scope.go:117] "RemoveContainer" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" Jan 24 08:04:30 crc kubenswrapper[4675]: E0124 08:04:30.460911 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8\": container with ID 
starting with 7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8 not found: ID does not exist" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460951 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"} err="failed to get container status \"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8\": rpc error: code = NotFound desc = could not find container \"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8\": container with ID starting with 7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8 not found: ID does not exist" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460980 4675 scope.go:117] "RemoveContainer" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474" Jan 24 08:04:30 crc kubenswrapper[4675]: E0124 08:04:30.461211 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474\": container with ID starting with aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474 not found: ID does not exist" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.461246 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474"} err="failed to get container status \"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474\": rpc error: code = NotFound desc = could not find container \"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474\": container with ID starting with aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474 not found: 
ID does not exist" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.956354 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" path="/var/lib/kubelet/pods/33cfc116-7294-4c74-89f4-e4f3417da631/volumes" Jan 24 08:04:38 crc kubenswrapper[4675]: I0124 08:04:38.630072 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:04:38 crc kubenswrapper[4675]: I0124 08:04:38.630965 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:04:42 crc kubenswrapper[4675]: I0124 08:04:42.751146 4675 scope.go:117] "RemoveContainer" containerID="f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef" Jan 24 08:05:08 crc kubenswrapper[4675]: I0124 08:05:08.630141 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:05:08 crc kubenswrapper[4675]: I0124 08:05:08.630587 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:05:17 crc kubenswrapper[4675]: I0124 
08:05:17.787483 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="009254f3-9d76-4d89-8e35-d2b4c4be0da8" containerName="galera" probeResult="failure" output="command timed out" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219015 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"] Jan 24 08:05:25 crc kubenswrapper[4675]: E0124 08:05:25.219912 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-utilities" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219925 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-utilities" Jan 24 08:05:25 crc kubenswrapper[4675]: E0124 08:05:25.219940 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219946 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" Jan 24 08:05:25 crc kubenswrapper[4675]: E0124 08:05:25.219963 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-content" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219969 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-content" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.220174 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.221794 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.235621 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"] Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.361573 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.361652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:25.361675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.120710 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.120812 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.120853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.121416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.121439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.138747 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.236413 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.994384 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"] Jan 24 08:05:28 crc kubenswrapper[4675]: I0124 08:05:28.257918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerStarted","Data":"66b1f8af893ade8bcafc88c7de20f7e2d80e545ccaa6b0d7be75135aa388898b"} Jan 24 08:05:30 crc kubenswrapper[4675]: I0124 08:05:30.275950 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754" exitCode=0 Jan 24 08:05:30 crc kubenswrapper[4675]: I0124 08:05:30.276037 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"} Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.349627 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3" exitCode=0 Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.349693 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"} Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.630073 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.630137 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.630180 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.631014 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.631095 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" gracePeriod=600 Jan 24 08:05:39 crc kubenswrapper[4675]: I0124 08:05:39.361089 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" exitCode=0 Jan 24 08:05:39 crc kubenswrapper[4675]: I0124 08:05:39.361131 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"} Jan 24 08:05:39 crc kubenswrapper[4675]: I0124 08:05:39.361403 4675 scope.go:117] "RemoveContainer" containerID="be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0" Jan 24 08:05:40 crc kubenswrapper[4675]: E0124 08:05:40.384870 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 08:05:41 crc kubenswrapper[4675]: I0124 08:05:41.383694 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerStarted","Data":"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"} Jan 24 08:05:41 crc kubenswrapper[4675]: I0124 08:05:41.384213 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" Jan 24 08:05:41 crc kubenswrapper[4675]: E0124 08:05:41.384441 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 08:05:42 crc kubenswrapper[4675]: I0124 08:05:42.418326 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-rvq9q" podStartSLOduration=8.144580137 podStartE2EDuration="17.418309277s" podCreationTimestamp="2026-01-24 08:05:25 +0000 UTC" firstStartedPulling="2026-01-24 08:05:31.285639867 +0000 UTC m=+4332.581745090" lastFinishedPulling="2026-01-24 08:05:40.559369007 +0000 UTC m=+4341.855474230" observedRunningTime="2026-01-24 08:05:42.408987751 +0000 UTC m=+4343.705092974" watchObservedRunningTime="2026-01-24 08:05:42.418309277 +0000 UTC m=+4343.714414500" Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.237322 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.239822 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.307405 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.511303 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.578186 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"] Jan 24 08:05:48 crc kubenswrapper[4675]: I0124 08:05:48.450510 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvq9q" podUID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerName="registry-server" containerID="cri-o://2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" gracePeriod=2 Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.430153 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467045 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" exitCode=0 Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467083 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"} Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"66b1f8af893ade8bcafc88c7de20f7e2d80e545ccaa6b0d7be75135aa388898b"} Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467119 4675 scope.go:117] "RemoveContainer" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467215 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.492835 4675 scope.go:117] "RemoveContainer" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.526170 4675 scope.go:117] "RemoveContainer" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.534362 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.534410 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.534687 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.536613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities" (OuterVolumeSpecName: "utilities") pod "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" (UID: "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.538984 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.543054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx" (OuterVolumeSpecName: "kube-api-access-n5vfx") pod "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" (UID: "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad"). InnerVolumeSpecName "kube-api-access-n5vfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.598264 4675 scope.go:117] "RemoveContainer" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" Jan 24 08:05:49 crc kubenswrapper[4675]: E0124 08:05:49.598746 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4\": container with ID starting with 2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4 not found: ID does not exist" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.598810 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"} err="failed to get container status \"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4\": rpc error: code = NotFound desc = could not find container \"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4\": container with ID starting with 2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4 not found: ID 
does not exist" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.598832 4675 scope.go:117] "RemoveContainer" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3" Jan 24 08:05:49 crc kubenswrapper[4675]: E0124 08:05:49.599243 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3\": container with ID starting with f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3 not found: ID does not exist" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.599268 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"} err="failed to get container status \"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3\": rpc error: code = NotFound desc = could not find container \"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3\": container with ID starting with f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3 not found: ID does not exist" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.599285 4675 scope.go:117] "RemoveContainer" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754" Jan 24 08:05:49 crc kubenswrapper[4675]: E0124 08:05:49.599555 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754\": container with ID starting with 76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754 not found: ID does not exist" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.599590 4675 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"} err="failed to get container status \"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754\": rpc error: code = NotFound desc = could not find container \"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754\": container with ID starting with 76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754 not found: ID does not exist" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.601783 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" (UID: "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.641025 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.641055 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") on node \"crc\" DevicePath \"\"" Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.822901 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"] Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.834244 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"] Jan 24 08:05:50 crc kubenswrapper[4675]: I0124 08:05:50.952741 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" path="/var/lib/kubelet/pods/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad/volumes" Jan 24 08:05:55 crc kubenswrapper[4675]: I0124 08:05:55.942607 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" Jan 24 08:05:55 crc kubenswrapper[4675]: E0124 08:05:55.943661 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 08:06:07 crc kubenswrapper[4675]: I0124 08:06:07.943020 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" Jan 24 08:06:07 crc kubenswrapper[4675]: E0124 08:06:07.943998 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 08:06:21 crc kubenswrapper[4675]: I0124 08:06:21.942533 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" Jan 24 08:06:21 crc kubenswrapper[4675]: E0124 08:06:21.943355 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"